[Binary tar archive — contents not recoverable as text. Recoverable member paths from the tar headers:
var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)]
s8zI|6 Q|O>5_$7?\$Jv鵵5%IT 9hѤL>'[ xZy!|!-[Qw=)O+(<>l]]ǟc{D{Ŧr=6]'w 2=Qc8{-@Ԡ FTuɠ!"dxb h55xH3>_K"6'ǛW@DC"ﭳ!s=72R RY2-Z"zW)s^t$ln/NO~[lD|<(>xꔫr:oh o ɸJ ^kYv.iJG<ܱ$O'j01巋-x+[z'&Ol복y XpJ85'/xV(`r9%10oC]xy^n4t=9M6$`WuVPʆ#v؆6@mfGL.GyatsRf,;K_YCK?r= +ï &yS./fHp0&FkuR ;U $2t)JdШT}m?\qoo 1je=gZc>[(ݴ-c'h I-m mXFx6mὥL9o-B H/%@WA<^6z/wTy6.>֥i}k>J}Ѝq@G{BG t\>~&m ߁{׻K4ξ6G@BPr3K!Wx8L5m+2RDo!PQ!h^A*RL0( _,JVyOQ *i¦64 RE%EHkmSaב[ wc3r!y|dW7TD8x5>pسnFf  b~Y:&%\!-J+X4bPh: T6!1.%|A D_"LM98:&tvBhYaֻ +6J[HQ+OLU"999Qjw5\qbҧ߮,=YR\~X*[{4{G?$h͞x"d,9=DaPEhNHupvkJTvƗ_GltyvV=;c돺}M>trR8ԏ=ys?/7gs_.<❐w5][o"_}9܋3gѦǒ'ky/O!2̒,f䛭<t ~SO[6!?o^Ʉjw=n./w3=p#90c{=|qό50ŎnchkyU1,jjMD=Ka3԰3lm[ȸ]] 6i;$)wo |tmknۋW@Jݿ$%vM3:AI53Uy$tc{ E:a1{f }H `[^%ke*zr\=Եڼ7ڤ1ocJ̗8qo')Zl"g3 vu9:72I"PHRri^¼BW_Pgw{'9,#fs޺I ,9Ԍi +ri8I=ݩ4 ye \[sRD\Ko T.B@E)KtaH xƼҌ'z? )sJ};\.P6Z@r(k^Tdo I_Zt-DhB] /VEc Q&I{%rVZS|M:/72aQb- 66:$))LT"1 }\!.DRaƨ&UyquPzӼ3o}h h 2^/[ZkVYp]lAq3T`fd|Vσ|IQQ5xSMVIqC%J*s1B(yA8km,2ChUi-׌W!/۝ځwL"gi"+ 8KC?d9? 87Ub;Ck>>?}#ɿBSH~ 0vq.\.#5E2ʶ~3CćġHZc69쩮G@}3^-~}?f4`#򏿠+o'q?p5{A5ε%pTJϛZzmt#GC[7Rx1m.]&5s|;s`{J64~]WKկV6}KrFFdX {uz-o\Q/w5ˋRcnt;:>\ooFv/ k#G{N 6l NDTWPY0v2$QP4xn-lA-:\!ǶKlȞh!G)D;MMEhuPHY"!@mrp4Tɖ 1Ң_A瓻Vk*d|:3ٮɗ]Ä[ԕ2[;9[nNݧ՜?\ҭVZ]5(CWm۶' }&zytQnn*bA]vz|?8Fn܊h|۫4tͣ[Խ]tM'b. [sҜ-6t 9Ci߷㮓KtC8jr\q% WgJD͂]\#ŕ(&ʜ+uWJT(uNo>`Mو .;7Z.O]\e({ŕ&0zF*,EmWju*CWF\"&g$ TW.\BˁB(N\Fqō;sRm{F_^Hq$oU;*8z 㾙{-F} !Ab:uIqN @5!1#AzHD*\PM Cu[b휼rN-h@3餬XBsjO0̡>,)GB%QXax/ u?Y0Apm 8LgkO4qKv@}B)gvkęM0PFN $~ؓK}Y*P_ge*V6JdFΣq.% NhhŠY>1<Ȉ#HTSS1[ hH."%$OTzK@zx>-'-PKf;"kTdvBCrz_Q- $&}$-77t=rCm C4/V@=x\ dt;րF}j,'睭^߭ 5`)8M WQ YI<hǼ0o3yw/TD|/AKV;gI]to;8 #\>}󜂕{:T g{l% PRj"j!"O:QARpctV*;?~xg*V],pm/Yu\f_,?OvEnSQҴaY< r8'm$ D!5@ 5:JEpvz܁TiW#TiKNcҾ/?! 
2H,I>xŃKƉ )"ΙQ[ur!(\Xύ$!JʘT2/O5:S)Om[g;%t$O18Oyc0GV#L"~i'~l(CXcQ_"Y߇.+y,SrBϨiS]08NR+ϘsR@47 M.WkX[}iZUH@i*f)2I| kO Y$e c$4 Z!1ʢJP)l2,q)PCBUksO?\GԀFJ0>&6^F*it6 xnÃfqj'NahC;鴯j_(.ᬝ̴{p8mq҂oCig&p.ȹX-Ii $Eh:&5`"cyʷzf 8V=VfEhcy)nU5ZoYm:yYu+Bz݅)^r/k6qrj BJ VHZ P/>pՎp 3;` :fUza?c|9f[kQS ɦh6( Bi&@4רaT(zL)%x4SUR ЊR-2B\dP 'S"8c]h ֣1+Yw!ǢIσ& !p-}DFg Yĸ [FK#y7E+>va$_;p)D Z,䂱3Q+8EQ L9#HqsKnN&ĎNd ?/P77!UP?~GYa}^(weu´tT]{?4.N?{ԬX&J @mCc NoƵ3&&%hi.|[|a#WoQK¼U^lX5?WgL?s)P>8>W%cMjyP?+DkgBJЕQWZ/WWзԏ{rUa\MPG/C'~D3^he#9acrq$ [ .)r7q`-I#yDg#1 剪 ?.37!"~R"QP~?l@&W2=muPT#}|;HWWx:(3oԐw-.\lU}/ϮZ\˂e't࢛"7vL﷨bP4ȅi oϤ-xWu9^y^djYeh$u3v kQ ڨ$zdSRQR'Uүy72;.n^@(c~['IdmEx g{W7mQ|#l.&X`oȒ"3qݯݒ%Yo,)fQ"YU*gD Zsa_ '?Nm4$,)XEpTiVk J1HY,+JQ#g"jh帛3ym%1hl7h xh 8 B&o$L~V*^*GgL \o`cѭYtY!QBKu4Li= ]_q7n#Wh UQm$"*NY[,còY*iLfJN٢4˂B:פ35WrJln%fg+ 6o>Ow<4x(ͱNdiԸ8E͊ʠ%D("-4*0D3hy>@{&2Gj:rI <)hLSLJMA7qB (DqŜi80JRQk"b RѦ1N"g7 >{tyigpQ`9%8i!TӘ2lsOa YPJІ˲~i7.)TGcp q32PEb"kz0OO-ThD\xxr u O "}$V(<|bL;:G@4* M"Pe 2&{չ捾HS(8Y`jM$BaxH*Iᥗ&'E[;#FKU~Hu[.g˃Am8D@cG;, S|G*9._uM: $Rp]T)6ЩjJ9(e664}k7?K@5;yv(WRqc|Na.vr6:~Q댱]2su3vNZ]W-}6?>_z+ҿ2\slvKUěk|0޷&^nwxGS1ߪup:Νl[K41N}8sبhbg=rEJXhX19X:V:/:ү_;MZ.pEɱ!'F"¢ 3 5p!(\Xύ$!\΁1 L$ SG'¿LIExjdcl\-jsEEVibg_|gN~|lMS>Z*65}jdT?Q ÐW ~֎VWUYn;OvDŽSW|#]FxWa2r\j'1bTէHY=jR}m=su[PWj ?(#2APqrf[ -кݵmR?W{wƏިRX٪a g5SHrc8:?^vj_3EiS{xGa~eMVXxiA]3 XrWv$_vIg׌)w/ZwX`(O Euj53A bGvi#ۨê"zQ4NScp@o3 RHF%9P5m'ca1r̴t{ç`ʊ ˖$iH0 |[t%SvmL(lnޅ?}gKt#2kIP63ufm3'=F2O%]t7O˭uymvuM HNzi&O7|ϣ]Գ]cܱ"58ڑ!4gF&ykʚOn$e[cknp9pQg϶98빵$t=s!?t楬E}cx˱VC;D ~Kx_:jPpc((+PsK-gJjm70g`n? 
bB\s1!Tsf*jCo0*XxA*lŨ+$(uURV]Au%d^B9L.'2RTj٪73 L!Jr+V<\]e*kTW30d]!]uRUV}*S)[uՕBhqA ]]ejT Ӫߌ[.&<3\]DbF.?3Dݨgv2(vPWUW.=5 u/F]erOm]Fݨ-+g :KuvX`Ò|gyzOKL7aެH/8y_&'+T&W,C%DR۱'N3K )1½6\hJ\Q/R,>lnWW2G3]w_1.(ӲuHKj_>H M TtڛB| Cu۬=/1 KΓ,\h#;"$U\G x NA57Fmv-o.TtT/_ s"@)wIp'y3фmRh(GTB .&u,/ [R@ }4KpqUA\+):RB҂i OLCf/xguve皎7t|7C*MlFIip_JK xZR\SL* o5P/-Hs#/a[fw Rruh$p\G:x"G.)@BF P ?3%#C'J3U"1g!i4*CD- B\d P Y @ܮXa ]*p@ pja݆J&!<n(qoA5V޸GFKeA7sh̪fiUuk4 Dk.%a̰K fv߉X)*)-s/5_l7 Rː}owQbt۟Ng{nnOq:.:7]t2R V_57c{)>w!3VDO?7, :?70:e$<4+{bi5=wCb}@9j18'31aD]+ 8?FYǯnqǝW2A7p%3:x*|m'ngoaN/|gˡ:\ N"Z#\S^O. ;%9_(>hu욢~6%Ps$ol$F:c*5eg4 c%DIqdhKb(op`q{u@qAẍ́Gq88}W65s9N|q0(vڝ,yH <)+VT[SQMXQm, 箎 jBhWB܍S(ʊ VV@Dᢶ @8em1C kAT׆Q%$$ Q G$!ĝg^2KШt4NztFUkN׾2lHjE)8QU/DxU7::ud$y,?P~6>+j|Ѐyd!2pR4'8)2Iyt$bżŒ +VUxV Z!1"AL ea =;eƐ,ird$ڦS J &d 53x ՁHO-2retbE*YTܧKVgyuqr;nnGQ?IjH'ne`bRD3 3LK.Rh;Mxg[r )_xκ^XOT}deUOا{V F Ɠv]垜x>ɬ]KCuu3Zi,*H\@ń@`PL(q&e(%*G9K |+ ܬƬͫyvݕynyohaѫ)ʸT>އׇmr\w+K$V0O' u++Ă + /oX&J^ߘVWNJ7Ӗڴ|6 @dCm:.te8n9ixm]wMbJr6#4ᯙr#PoGk(bNޱYo6֦eG&)לvT":6a;{Y)M"o]&qe;Nfh:i}z&{hYlP.̊'ކ' F߀\w)%%+4"4xeR01 StVTPTKH"5;!P\流^w񪟘ycG~G-Π<;P1?d1&3 uePf7ooVot/n?_2ME)i_LvϗILˆ+e5{Je΅$2) lC\έy޼=o~;ŒҀ.M/Onc:o[߶?-O{xn@L7̵s_}/X6}ge<=:/S/ ۗp+mkh5n|;ԛǃ9rӛ?գ}˛i9tt a854:Vuفc΋eF.%PЬ`&2&Y* @HȆcb.5`YA U= )ёȨʈ \e J9Q^ӏJ3kTv:%V~<o#m|yG?bGF&,#Yj}>}w: C̐ vd;;E:(8nbE]3/@q<@\fQ B 蒉h9n %J9Y_t"șfa۔N {hu1GwV~J{*#jWM|}Vyϖss^;Y5Y+wpK.^_8Kz0 6.{7YGtkAnZ^Fݸ\[\0tEbi+6v6dGnn7M^rz([ݞ:_;QuKs~='/=#ۥxE].g:욤[XϓV z0+gd5H ގ%⥻_m`2*UG+邶oъ+6NJ;pVuܕ:c/9XomPeL4Bs8ts[ulՕDR.c/ơ0tOu4L]Ҏ,`m׽}t5:ZZ _vOIxnݘ>)>T rbo<ۖ㏥vͶ!#!-k_KyϾJA SdpLh,^BACABAm4$):>)+S^0m(QIyP}tDR>ЂWV9s=Jtӡ=dqWaiii2rL܂Z _Q;$!xmM1&UwNIv:󧙹,79O v|7ro Iz ťO zmwECswC 0y!f*Åu:tdh&!:J$=!ɘHs&Le4Bi)WF^_rtZ؎o}-q0nӶv;FAq4y7{!aHԍVq^T:edG8 Lo8nOAI>kO"?TdFdbgVHfsH90Kbqv NSK묒I[ЧϞ,Q]']8M>~-&BL*C˺GGG%5_9g,q2Hؿ+Nfl烷:C'YpLd[DK%׻yE;zxǥ<ZHrht}H)ijh/rf/.e, u ϕЮbV27@JVK֝ b,)nh+TɃt=Ji$B9 Z .Q+lCl$}F^>D!/B4SkLlWvMMrSOVu:[DlUௗt.bU"UFx(cb=`A"Mt+8GpGÿI&OٚH0NO~ߠsdt@;72} )^J^AF 3cr65t=遶˾" ˨ شrRY*iXP" 
$t1@$[)daΧhfo駛,pkIEi03Ja8)-6ҫ._aQ|$Ysi(M7\ڨ,@Yc+a_Kig%; .Ql8?nuxfkPhn!ҠqDzٺ]|Xzٳ.Lwd"pKXFqL>(rҳ$ugs!"[zn7Mo|~|CK-7w7G=.k~'wcڱ>]W_MnY_4A4tmiMc>?J~rkls!-Y-hKb[,qT/TRm(K^J Igw)ArRNq&4}J[=A)}}}AD0I#g!sGD'OJGzL[4JDTRg^7kRA"%-j@):2#`9BAT6҉Ӫ?xFt ZTf V9z4thWD\mnA#A=jiˀ=n\hh:Mw'Deu4&=77tz7`Zʥ)#C>$fv)وv'lO(وDQ *!-h̖ҝ9Fdo^pZyts݊>#s! ҕ-gjmb*~Lu{|4HRV=oXVu$0c;%ԙuf:ɂC`&ۢZcRoMqG/ű߭2EkJ޿)ƽEij"gÝL5<ޜko_3g]",2_.i'+%NII1a 7 *A:FSJb!U-L( E!a6>#/br"h!QCƩf&T g񌄫l/ cY'+'Ƿw5g',*FjJ<]B88W`dp,2Mpt$ fTF)cb R(t̖K NsII֌Ն]3VD)U݄^ IKyAkB;LOt8Y?} _-W 4bθ(^HQT:( 6L 36 *),hSÎ8kׄ#jHXF]9 A['23L)&HE;+ !]p>f#!a\~Mƚ"34MH&Js5\G $E%+i$Ԑ5N&iMҚSZ gf< Z8ɑIfBW'8RGຫ+1H@ 3}gM6%lY ,xH@'@"N.lO(lO"l9C)/m2J j@$gj{8_!H";$r6~IIqW53(Dr(Θc<]tUMuYg%, Afb"111Hg =l5`nFLo^RP)AYT{D_BGJ !+T QYAzžh=IWHM=ѤW;&]Yy(fecuEuu$H]NV+"G6Jg/DGbukqf Jf\05u)QHB ;f/2$L-DѰ'f-5F7+6{D2 ,^1 %-eyi]6Axi!r˴--{OISGgkH:1j)*^ (0ildhG++ & |M"nm{PioLb`Z%9mAgm(8ـ,8TF$ZǷܮX ZizMFv)8>E=KZe)%"7QJ=Λ#*)X[fC4rYB`\^O".i_$E2ȅˠ%^@I0uc ţuZx$ugOSPEg 3lӰ}&1"֟$qS ͔sI&L'n65z?* B ‡a :pp<cz9Q.1nG(߿`fŤ*E \/pq4?H=OߩVA)rq>}.K?@ʦ4%bVW·;lz0'ՃM|%o2H}3($eivbbNSY+4K݃?:]x'!̧.?^0/"a|J /A.?շ[d@u qFE~uEߔ׿’]_p򡛇D/"ߜ5a`NŔ\\k9}/`pF-oINs`=u]sR9 rVpgax[[d4^BA՛d0qȕ_Gyr((uzH9J']g$iAEݧ7u{T,a;6՚ n+݀QÌ`V İ;͞\hs2 sҲ&w("W=}o ֺwH{5 F_)G3QNm+gʧ,Q(/:rsٛr-MThn| I%D+ad 5hB +lW/(̢=7zʹ-;v}x*o ^[,=4&Xlp%,dYϤw Ql-DUV,$i+ b@ ʄȔ PV-:VyBB=BJsY 8\yzHj3:0̋s&y9J;n]{D͗uuG[#ѥqCQMD  %H!p91ͽytT;x[h+oV*# >HcXED-u\(K`I*h`Zj-\Ic֒][l i!r~}EaH",a2>jS,9܄K3~u~!?Pn OFhr}x-Ku\gxc`? 
z{k :)=.3-Mv˽|v~o<+?ߦ}ۨ0k\M7E*(tDFYj֜IP%Δt S)βؒ W k,=m<\nJՂ3 *vDpoC?nybY}e{{ܠby:żYE.OXR00swGP/미@T/'=_NsKx0-'Rk5ug=.ޏY] ΉOS2Tmpyh7,/4k*- 뤷[yݓWeE7Sކ' F@/]Jsɕͥ*{6"4D]8%6dxO(ԳrAg}{-{qUE_kAY>E/ x'Wr#ۤwyϭޖy5,/5#ʞ2\rCvu5⠺U}prG0|gq3+ MMj-ˋH3+#lP[{O!BϠMѕPBܴ=]!];DW •+thU+DihOWgHW jot2ڶ ͚S;,*P͢l~'ƃZ9z%|8SN; Bk\th2!ZMWZvhϐh-Lڅ7]N`O~]WƣHތ Tzj=]]'0Wvp(~< _kwHBQ,fQ>蹘\ (-l U}TT]AH[\zkF> zɇrB ^|~8_uR1hv_\~+m&YUXs\ YIU?Me0"Q -L20,fْ~֓$ۨ% ì&VNbaE09 }N(GE<mpIW<@H#݈Rޣj<ĩWDimOLv+j7ǖmtJ6R;ЕjߩlxSτĹ &?ݟxU !JՓ Rz|r8[88i<ꔍRy$8h&g 8+}"S-9d51v3p SM mg|DI{?GgZ0e;DW?C*VStp ]!ZNWȞΐVJִ;4elu(UOWgIWmDk{K kfpV-EYWk|}5X0+LUO8ۢXĀ+ɯ+t4G09wdA;de]ĕ4J i{?X!VvǪD3V4(i iZv׈ҕ4F{8]JIDOWgHWH#C;+thycvRN9ҕ62!Bt&ݡ+DI Q޺:G2HڥXg Τ!; HWrB "`E:CWWw&ԎhmC퀒WCWS/ž>Ծ`O W8c7JvC[VXQ@W}*Kս LIvQ;ȑrJs|:]ƋzkɫM"[_~ ~U~r3kt Ac !,|2 Ȥ9ex?_ʷW`z freV'< R~ V{]!vS n;x{O\?HśQ}7o|/eʼn_^ 2IAFٻ6#U~0`w3r $6S$CIݯz^")D %ڞ5Vgz~U]]ilOayPwlkrQ>" PTrVEd ZGfXby"ePtAF1)A`(">n߲ / b5*Cz ^ wE%KʮW&'}j|]I,~:oyV[~*ߠ3RZxhf2Jfv:Nm/uz,zUۡn_ɡ`yj;?궻Ǐ"[3@EpwA_>/Ӫ^ -r8ff6ʡ`|c7 Mk}ܡ/yrVtь+M2~*ә8`*}T]kq~кa*Z,f,] PKFP^,aT`k Q_f_8G2>>,[ЕòEVM0>U5R9a3Бr钢ѵqW w>ބ$1(irfLc6 -~܌o/0=Az_9RPQ(Ɔ2 &` ;"4QHOh|4SH 1" +V2""&ZH0<)c"#ُK0`?#4`6DE#1E4^JJ88Ŝ(aHr|/  6זv,~N\[)ZPbw5ISonsvCe{ `zfYE,XPP<8zx˫i KP;I#I*k< i@dGڶ5.t]q"/sX9rk.~$%NHJKRq_,µLR]&-NLS) NLB>wPT|LڡؤSjaayIrQnG܏;.{*20kLQ)E&(Hqlw4Ԩ'NPiIY'"@'=:}#8V< E鄷#I@.OJ;^ȃ.r*H.FF:v(hqEwX"^B$EIKA-Xg^q|v}{s1$7\yv;Fڟ.3D?!ψ^2_Tt+cSdka 8RqM Z aR -cN{둢 &QvS)G:`[M)0pZFe@ K":s`lpgsDWEAdrP0UmW׍y M$jՍa˴hn@Ur:\(I2g jҘKxčuJ)%@\X+0@ V G@Kr7)rhlNHitL*"E 3â V%1  Ay~#,{< {&J{Մ)"z`*g06͛A"-:Bza8J8G@= VM J0q9e amʫ(F2`M;S^cܜD1hYY#AmDMBi)fƅ0/ q(#[ڳpml\3BX'`1G^97 #$ Fj*e;Q!(8v0)&8 :3vW}y5GLw3w)Iwrʴ\#p QjMѳhh1)%alcrNw C唸X,"RN1fa]tN@ m (h9匳 uU8. 0WuXEOL';TGQ2tK&rwG{I*s6޵J+Հ0b(RXff~FGu)!N8VR 3yp"o;;@z7;zw.w:;9f _fW҅frsOlnB؈dɩn4!͘/0b͘11 s5+&(ϰfXh`RS%~3]2(:ojVc&ye=6MVHKD9UVk7p2&j z2]-'Oͬm ѪbV{AlYgP=UZ𪧳EYglFNҪ<|fڟ*. 
ńvk2yUr<:2jw  wcyn~H[ny?]t3 [+Sq}7C'\CeͦMC YJM}Kq9&}Iաtrmײ6Wٟ`\dK@KpӹO3nIroʟTq7Ȼ7??n" E3ig:9?MA?xZó%Gɶ*\[J Eue ~)gٱBNx}37okY?A2u`ݳk$uUky{*rw5mN@ݕkC=kT=*~WFfU͇h4LT6|RT~}}:^=K\"n7|z <߽]#s90rE=>_kEP!IQN)93Z~&zGqΨ|''HW';fΠ__õ[cR;ĬW:Gjd~sVk/;^Z>c}rmG$VN/`*7_h׫=(0ƾT)6d8$4]Ir e‹,[:WL |nljn6_y.zؖZhӌurBҡqF3آg  E]άGM# *{Np<tvVSOgqzJSŠO| qy2@_NaH:u@Ѻ"g8CDg_aKֵ /sO 2`-%?Nhq˪߲*ە{mzu~s8)5ދWxYKb% *[Hˤy*r1CVRQ*fąk(-TLel?TPl%6i49:'8uΈEaD {7;٩;Y}j;G+*oQNO(Uֳɭz+˒7fNZ ɨ2>oeˮ\cmeUK8ĉѡd+JBHFnlQKًHG?UM"Ϳܲz\ Wr]^V [::FUCm]-e|vً`_B޶)"oe[so⠓#&ê.FRNIՖ/ΔHV"_Q+HBQ\=<%}g)u:Z RV_:;R9R11)"IJmvYD?zӎv Nѱ^A,XevvƨIpv8:߼Սﰁlj1<[s۶9 1!W $ϴwۖm[6{mHa(sԾVd8oI J$ycG2mc4Q4ɧV69)l%[];&f>}ڬcwWle<|܊{VUZ)9ow>`az|}Kb|޸عe%KD7ځL1N+ݔS M >s$ܖEBs&lpΎ81k9?z#H^RIRFL۵N~)vR) S*AOVajP?o˞Dܧ*>uNɊtO*́3%VkAfcX8>بjWe"Q6LTen=VƑʚB 6Q*9iU2y 8ܯHO[v|1%8kvwGkb̧D\4)`p3U#}{/=JEj\>kER:Pt}Y.>.^fc1*ȞK6,`SUݽbd:k1ymòԓN*r0= PzAZz{AJcOϰ'R`JWM`k+hPp)I#\}pEƭL=EPI]^4#-* 7|w1$1J :O}O*O}P4-,{a]`}wq0<5LA=f/ -WHF.!Ubpc(S+UL=0E[3Ѿt˿ytCI^;4+u7{0w6"7Uc]/U_44WZ=\oܷa / zMG3.|۪uC3e''ܭgiCӑgw jn6Iss 6L|kKxW |4ެR8P ,~v\nnMؼl2>7o7)lHˬ' %þ(_X/-wۚh07;7q ^7Ɠɟ/᝝Uuq…J4 *&ɻ'le 4]^?;D/N&)|wu?/c|]|+ ]׽vq'W֩Q~RCC^C]^LS8Ia@m~vCׁ;g[/Z~ l+{ |wV#rWunJO0kmӲ6Yc,N}9`=Ŗp:W'8!٪Jmf 8 yf__rSU[FK? V-5/7+Z/Ny{q:ɳe6i6&=xKs[nZe]q^?ɛVySms^NK˲ag{zӉlI^kK6FgauËX,/5f|E9K?Wm]ϝwSgseVhV^>OMܕ{Iݾo|jR7>}4#6}آ~6{Y”8z䫕mϮSMM95T$i_Nf7K3yYJ9?Ju"XXWIvw+}*ij胁&ՇMZI+?C6'=Tj;u0pP ҲWMJu<9•Z\5$M\0I0]]*SjR*sI# pઉKîvU҉#\}pŤqW y0pĕvդU{Ig?GdWM*=RMZOWҮ6:ՁÕr qzp(isI)]G?SoH\A 8q =Jf+/p{Soyuˈn;> J5> _N~9;?hUÅMw (c- VAg_o~p>iu}? "]d!-kapp>vV{k H+}ЖhEه.)\ɬdz5ރǚ=qZS{~jjKG3 a30DJPRʇs7z?O f56/v؞pkU:#|B7b}{!D_"p-ҲȬѷ7ȨLTA$&|ړL~+f!wafa}mYz2+o~;it~r6=RprUv-E h+A2ƪ*HG'B"dئ2.ĘUZU D!S̝1eK4iXkVZ,b=uR`GY1'LDE* #{WYh]TvTZkȭvˆ-jLQiR GYUS 4:Iț")!I\M"dk̘ƪfpjЍI՚#IJDa_S AXm=A -aI=^S*% YfV먄hn:YyIiXʰ1VLFe`UF\+|uŘIӯ̃YH%Dؠu1^x߅ُc9F+s\(qsTټ*I'JdU)ꌑ9QJR-,e J$Q`Kei:YVi5m.dUۜ|gtMC$%MeԂ'a@Ƌԁq@V߻mU2(] 6Wr*=h`j6xLF"&qaCE]E a҃c HĄ$1-"J2NGE`Z’ŁSpDUeef[x(nEd6 ࣇEV2YЩ֏DA~X[QH9̶ )F"`;>. 
ƛhfwk.G~Lj`$,kK4X(;KBf| R;ЋyQ Eޅ\| P5j*P P6[l0d(a"m.du B4aRY3I:@Lڐ Dz!peR?AiݲSA33)LueMF)!@hSZA`-!ӆ-m,AmU(f ,xBV3x$z++B]?)!pp!"d8V=zr1 2qe 8“ -1?kxn.dv1RU.:8*E;]jl1 >艆0AM(]wbp-`xT|]W¡@z'Q"(BVa-h: KM4<4"%xxYz*/'_ 9וs,k n I7M~( `?͇*z?gb7 f |qupA}h7#_~Li3R*_n.,8}MPtz=9?;L'= JVn?P2)5MnӸz,=_ݺo _{O_Ҵw}^^qf0Uz.oϠ*4H \ rs2$X+ѓ@`2HEȀbH! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! vI ~%H 0+Zv0+퍋H daPH! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! fI JJFxB$P1X!dHbN+D @J$BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $z$`OW^s$ɐ@`>~IH1@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H!zwn޻_eiv{}Z\?PwH)Nf{at3>)KeޒT5'.=Vj[>zm\zE9/zo=-0ezףO?c 1 5'IAM6[VP"SD̘s$D 1ef2UV-$%ȘAbXH?ܟX!yVG4t_ FFgKxTW7g4. xͬoL|7<^s_Ǚhȟ|:MdX #|9х{i2~it-M?. .QRxT7Ay&/q ݇Ql/mߤ ynn/l-vo};a< ̰ߺА.gey}klmv嗶5geX:ߌJ&DgC&;UK& jĽh$qԔdȈrܓrK{WzҘ}esJ(e΂aUXc!z =)tGׂjE) fdWoCP_>JemU۫|M/9x>r=ό||eizչ6Tv {+FS\=*}z#{ M<_wU=zt=mVljd>iveb{եOf̀C.1/64Pͬ9ޗ֍}5 d%yoݿmeݕE5ƣіWXѼ!ss^aĺ YfD{8xGxҿM_-ߙ}}9fzn9ήB޽BF*߁a񾛷ѡNCyZIZľBCI I0 i{ɢm& 7YXsS(mc`BhL7fw;vy4Ƥwן^nm0ؑ{!t)tzZ^#EAamL ҵE~2a\]fϓeZ1meq2 {S(W;b)ֿ FAáϞͯOp$?7dۻ݆]؜Gk{׽%9q/T\ "fϭCNmh[m=آz%EY]LTY=7oҸK7b0$V҉ŠAb(YC2U ^XXϑX-q<凞j;?o"!*1!U,bBHrD.DLY,٪58=0 QO[輥7Lg^1oem}y Bif_M(T@?A>$}2phLͷ4Fz!r6񓺨dN$*r#6mقy jq::t L.)5)pY()y iFtT IZHri:L{w0T͢{LZu[g/+fcٚ7#Kk'6s!̅\dg?_ ?Z>]䉻~A? -;&MA^zjM /,nW]m;U(t|,>PbĔL}B$5V4\aO.jr$urD͐ԭӯ;h&Ș$#k? 
2\v+7F RBц]*[xQq=yƂ o sISn[:w4m[xdz|=NV?>q.B (yT I膄%i@dB{/9Bb93NxMEIjvt1nwj!w`]cnZ~N6o~o#LĎu#?'6ƪQ//?󜦪ƻ*Fj쎌Uʫj=㩱VcAcڱ0ӛr٤=O|` FwdD*kÆgEFѸҡ]Dl21blf&aUY.̪=l9 jxXJ!T7b"ALP c啯$\ {4}>>Xw'?:i'apmr‰,mW{{)[nc\x Mm 0Ibd4"BզP"b3s 07DQ"((xSx5d4-GCfV WYuV²d`&&RuYƻ?+.T3gcbCV!W Ur?ovJ7gqЋCb^RC~5MjW] %s1W_IB+<݌CjTB !hA fmԒ ]#8R4qhI5Z5nOCcjvSŻ1z%IeTklJY 11DF$ϙk%L%+{/u&~3,l`F0IKόR:n}R;[<#JXL랉L\ND+ZaB^9ZHΔteS)R e'ʩ-'{ bxt\'=6O{kt;lE}p~³m _؄}*_pഓhaȒڲOpL 0JU$0׆#EsvL+~nC(;19_Sk}+704xnr݋ibO"7HpO5^ )LSGT#_:/ਬ!Yr-I6y W2ń3G2!2%(Q'?+Wq|bi׫w,_»[Rн!ȃ}L/'spDu: 0:HzԷWQMD4 ^fGO`#ʏh_(?}NG|$PET:.bf%A e2W* eyhSBf&A%*h.Kµ.<,V`̐49-XG]™dep!!&0Z3&**{p5sv{FuL~ēƣ4w$~k'_o{O.psN1C ,_)darpbѩ}F?EQc=C̍4 n?¥D1|y$?<92w*ƣB@ f0(mчKyM>e }Icy{'_wz;?==nIű>߾citQn{|ߦ܋5ҵ{cS9*~wn1:J,CA9J9oY䡃w:6IUo' TZPb{5mf/c%<AfWgX/ˉ!G}y}`^522ꛆڲwKF1*B3)GM1S{0n_ɽeH*!iv"sw۩ͱ+z4zMK[,':gFB g`ݽĸ"@cI'jlO߯4YEYl6Xņ:_9]K6,/ R}xT,#Uev[טRJLTs!M}E~aYβs\SinN a׾v*~&u\U_NGr7zz۔r*h.FF:v(hqsX:")NR( Xntcd0y5%dc̲!Yk65|RH1_^g Zg|ҏ05pVzq.=F9 #/4ÎcW^j@rD'.H/J<`$1wgvѭQ2hbR%{ y.2洷)D"0ꀝG:ض0!S&J;f2ʂi" <; ~48[1[T_1($$= =_}jj}{svկ(SHo29V taUp'i9`Ԥ>pRLK̹V2(0i'XI6 & |NB\ ls-*#R"%1,(RfJb,A#E))F l]) " XϦʺ[;by4n `nu^IppPZ?O,iԴ": AK`0#,A9lM* 굑*,&uT"Wqs 3F )xp4C28N9 [::A֬>@9u 1yg' w/ErgEWY1/zzv%ufZF\dje']~>e,W Kc23هȲٔQ]u5 lְhXzR1wY%I ~+kb(9>rnI`u&SU51pwX{qRD?ȸ.t$3s!XY+ШGܙ=UtVzɀ% ~6q.+[&l[˻2JuEnQDH5[``RLݭ)`O&y}oS  N[lB\:$UX3럠#{]-JK ,fxP"SJy6~)'D9KlN'"IBx_|-JKH @Ow7=̩v_s:,9]iSwķΪ؊3M ߃߃S2Ҥ6{JpWtg$-s큢sC+%Zqd#=])#`&dF"κV2 +R*Bz&5 ?oIKyk}5KW VQ \0fHB NiePʀ3Ϥ0:f`z (2[Q\ύ8LsQʑGξ{XhtXh,euOeT ;+) "gFty5{_3ɕ;(e @a2MBs!d,^F;AcƤ̶ #|i4ISTа:TYNID`bCD),&}JhcDA)gC X*őprUYU4.,ȴCElH%I]7{,~rojjF{*tT%C24k#ȇ;N 5H MȞ+oo5.ws9mOy:]cH>AI]vW O#>&uQ)遛7o/y}\)+ a9;9oXq8 ܦ7VX;zCQazQTgd68R KoM$޼kpY$h)-кKg)?U'Ϻ4 j{nW7T>Uw;oӁFmxf8vw?cx*--L3:Hc h< 3<$%K-S0|~zv3.$N5KtBITKWHӜ+ Xˋ+0Jhl:]%Ttut6V}PJdt򋡫VUB)yKWWHWR0rQU@.g1ಋ%Rk+%$ -tTYQ%}ŤiJz/Tp&g91үbӆѫKy%▱ jdTp.X2'Q?Юhfޥi:QdčPUV|F[fuAߨ\_|Z(ڜ-rm|&zG,C_;'U 9d%<@rFϵ,wJS 2żc ly LqY̙C:jtNK1:`JuA+<6 H\ʊ&%w$+oeECvCO-|n祫~V-9O(Y=]Ue[g 
+JpJh9m:]%tut_|/te<`p8~n-NYțe~{ǚ[/UΔӹF0?W25[j9LLUSC 8! bҹha\ =3!x Zg< TJ^ 55R?OFO&T\]%] ]%] ]ZOW %-]]%]q6Mo؞IcLRET1Ygrv3%$?Btե4U~rRoizh+%r*R*tP+5ҕK+\\Ge@+n:]JIZF*b*UQ*`BI5ҕFStAtCW  jtPlKWNWxˡǷqFj7=uړoXEW t[w1vЙOS-MiUKW_+X/ݩ~WRa9Z[Sx!ԘbR:ـr)r30ۄ3X((4w yl4ǀ#LxA=iƥTBp/aQf}IJWK~@{6?n( o `"8>zmy})s3Hj1tfgi杳'oxm~-nD٧I7:N}gD5aXo"3Uͧ{["S\%մm'oɹ 0bQ@,DDꥦ`` Xy$RDWY5x-IaX0|.Ob0@}3v휼p*>]EpZph2->UC3Z2Iګ{iԡH Ec, %?q NXSErۀ ƹ鐭3*; kdz[U9.k՛F ,Rg2DXt"SDc*QXZb%0l9ޤ UEg:k)cv{YK-T-( ݼ~F >hcr%aM^z,V3hfbO1V֭ЫΙUώV' v:ƴMvRm6fP0 4Nׅn}r .m(b/8#)QRTLƒ.TGcJ_1HCOk\͵CAΗeNҩƽ-3'loxMե6,bCePou)VDeiz웧5/N *vJsuq 뤵ꞛΡwY?TZ7eE)'G^^V(i{vVkrDb<{UUEh3u=քBaT.v;J94##AL;D8͢D;i,E!"$ŌҀC/usf8&0y-9۟&ai's*6Ahq& cunjآfSjF* T mo22y&}ʳ7WAfu@3~ zCV6v8 7yF)ӝjVCE'Q Ewd!'1*RE~1_Lx4l7̑_ޛ:: $ l`gLE*'g wq˂C2#;/ͫ"+8@rΑaDZctuip NT#(]eJ0af X;=n، TEB/)Ss1HQ&Q)V&8QH:8̈́ A2 ǨU1kQL | al@ّM26x&ೲ &8~f++l'$= =_x^}ޞ$"++O6W=u05G|gUN]atN(IZ-5%\:ӒFi s.  L V A$:\]kIk SװHFǤ"9̰4Pc  (B' Ll^7ܮX }jcgSVvneF- RD 18B1AD4::?勲>Xy}oy[Fkzzz Upl7pg }_kwM? H>ar|dpY$7yD|$␄]=[%9_0޻pUj|n$Z@͑qzBD鸜\@3%7G"@(Ed{pd^G}[5{:\’$Wmᬬ*Mh(Xj#s' I =)ZcKnQ'#g/Y$;("X:z72)jr]Z',%S\&%xx|GXb. *?VhSJ#>:?Q#?|ǝrjwy>vN+vd}uirE%q1ʹJ6{BcI3@1Y5ﵯF: HfԚ$I(?TK/MN,2vF26>X$WK\-Km_>tc7:N/<}K9O\u:wެťիo/OSoEbQokݷr˧Ov`^~gUv_3yozgڹޯiiiI;|(sذY ݞX<+5GE\cIhi!>K+I;I-IB8dd dr,z"$HC$BXTDD Қ.\8scc I+L*$ <Q9)O{#gjQ S;>5ٷJ;@b܉/-91Fx2_X2*+9 Z2u(œe+6[3vl%vJC0'M"[)4E₍:5uLЂXs69XTZHd{B.rF)D;MMEjuPHY"!@mrpįN*daX-.d8><Vl6mLm%5 -$)PXw-ݸSZx0x~èsunl @ZҚ&㙺f '=B2קWWovl~նvnz '#\r&g77<"24|| wy5S֜bkC6cilsN<@V'wrwmF:0AHBh4qhM!U\{\ڡtqF:;;#].HѣES;nV4jqDŽ2XdN ;& y/Ьg*Z}3㠽})F/Oxv "7{ =8?Q?z8Km?j?,3_qr1l:Sq,K„^șFfon|:UVxn/m35<`X<Ċ^n:?5C;73?woz/C}=zFtc^i{v-8:ȍ`Jà3+ozSYO_1NBh-Q%wJ+$TF h*T#dWLWsW=1d{a+<A%R̽Gσ/P*&uvD+Ǚ$%LhDh>PJ+B;B-gmtTpAˍp8ccD% Brhٔ qC 4 oPKgB"l"l,Y֗N(Ff_=AyR]>}3PnܫAT9/}u]C8^oVY jN&P YI<h+Ew޾fO㧩C%Zep9;g,}s"$S` Jİߍ˦zHn'y[)V1Gi'r™*1Qn <׌uy<%msJJMdC$"tJu3le|q߁Gyuk=ˀw{ 2jx6X[w"'4因ްuQ}w8IsՔJj+pT.#:JEp8@qٵiwO">! 
2H,I>xŃKƉI>@HCO(~E݁<jjNCS>1פwl'}|2m|y7Kڀh ѳQ?\yOjQt_B{~g|('WSy8~qpU 3¨*2NiQ8ӯ~6P2]00C>!?NH9wX# cC*rulgꠓe d8}jNyH!I_h2'ppeєbdhM㯷֢n,ѳ "4oe yfv2pyJ] ;슽]]{T` Wî2v Jή2 !N-zCh&on"MחH?/?E+~V4[պK)w|MG:*V'l' DF

;돷tV:C"2xd³k6DOcn[͘~S4ӺY<ÆqrN҂@ԡ׾Ws3z<:P+-j8xkr}t쾦B'HX-Ii $Eh:&5`"cyʓZ{A|Bk>NMWnۿ6ijSyU[ ,Y 붓N셝^jIY{ 3}_iѩ(-TYa iCա D/O{w@tz1U=P6˜WC3/5({T)-H)% >EPZ .5jD&7 EKmM\hm1RHp;m<8vF" .(CQ$jVkg9׌+_m]gH@ъ;?յ>{CWd}e]x[XhQΧܥ.lD#ۼ ?y5<2I)@BFn9TI w?FkIɶNy99gJFɃLWIi 6@+JRDHQr)@n7cPe&pl &]߱d H\{K)u,QlA6Vh=㒴횅;M;IM#^؄TzJvp H$*pZ$uhg.wo PB4 %K%b3, LpT#LD NiƨT@vs2!vtt"m58}e1C;.!:n}8R1}Qߗ= 'iz8}.JO͒%8}TIhIP /nS&%hO/oX0D'<n, *uOf<#g"Z SSP>8 E]bTy~C㟯p'ߦ.^)3k|KnU6{U'tm3u9YHdC9Vo/Wm'p"ٿ93he#a$bH.nraxz 3El$Z@ΑqzBD\nd^K~YcRy=?_6 MZg4<(G~"^]mBIo1} [0sA"]ol'Kj}{,Н"" *|GE#S/*-2[2regeB{ *\wWʈmU9vE፫r4$,)XE8iJ$Z\@*g5*~j"(Eʝ&:EI>ߚhxt j(!g6ڒE4,WODh]ϵQN:|/l88[;^{#玲{XZtXaJQ牲pU{PfrBhWu !n&uT8ʊ ZV@Dᢶ @8em%֋#߇E󢝯W]mIBJLtk. ĘK1K>0 (Qj I:2YS M0SKO$hxI`5Zuլ'NRc4~DZ iS94Zso^=i\z^z.X7t ;/h%4jU0rQjgJ( K\ *57h ΋T7Υb Lro5$gz=5M}(Cj߻HKE6&"Rjj@K.!iCcBie p KFg M#{ igRi6r:wښ"uI{ST)Ƅ"Q%Ol\8)x$:Vl4LҺWK_ǿo56E`W393 û4;$ҚngUN1Y<|&(T`&}~MJKoHgl k\EK6k -M4iS/z$raR_w~D"crx^䖝4uVv. VXlq4'ZKk (WKYH ӊj5T3 k M Ιf/+ " 7(!jrzWKJQbh|]@$hfSZZ$`OWIolVC}$&zNG4H6F붴g5 Sz]Ь`ƃ9MhH0PPMcC"D @< X $Jo= EGc q32Qb"k|0Os=-ThD\xxj u9C "}$V(nq׏5|pĴ8)HqIߩիm QaRG-]v9W"[/=PtuVttљ=/-$ur*yw]>cB]Z Zmw&8./(rE|pq̅,N]ƅW[rgJT^-[LG\}_N+ 7VGL`-l`Yu(ת9GJcm: vX||C0'M"[)4EujP1A bjrG"[ο9PRU(Ebh18 )K$M1YX8;^]Nyw|P]sE8mKn+vuo`q<;5rqɫf^rFfO7sJc>omB)*ܡ̟\ZL9pxye]dx[|&# Z$Wh!48 =y.U[AXFhgFD@-q9T;&"sRXI% y/9M1ҭZ6OiL͢gWH\e!Ab2h`;W$wp ^kfsxX*GR3\B6:yktA@8/WF*ՙT*&yl~52LZ7P h\jdzȐ˕G>,pT1 M. 
Z^8l!)a F;SJPNP'KY0A[_\b<`ir4ܢc1YK*zS*ĥTL$А3A3CMP3@p&e6,YםRQ8}GJF}^\52+`;gK2O \!N|_Ew`wg>,,Yr$yd}jɖz=JwbbթCu8j|oxY!K6F#3ˀWe Dw񾆏+gY:d id% h@"HCtȒNf#K` [q|IZ'iSZ tϙY+)XlRUa1wFͬכf"$j ,B1sWG-/HrL'I>+&~;eu1lu]պaVn\oyAL;@{Bb\u7is^qR1]bMrmˊ@) )Ǹ)  ;6B2a1VCʒXSB0$Y6H$33:PYVug˨!)I Rq9oǂ"DT )3KF6ҫND0V;aɩkؤ_XtUp+*cb^"GM "Xa סt kى>+6 irg.fyd"JLg$5 }e\&)TNbaVQGM$R'c'$c4:U'kY{FJ/*U܂kA;H@)_b7r;bc(F)KsF5= ̄,`)(@R )Kn&%'A̅ƒH-e*s2huV hp}A{m*2uTc\t `|*_X|/Vh/W}x q'`3Cl( ޶dwbi]IsNAq5RЀwqsS61 |UE/Q񽺒]mmkcXv]'p1~GgV!~q֑RZ|vRVظ}2H?PTXhHJX+ bD2 :X/XPՕ+ۃUWȘ$NA =!ȒOSʓ°Tr1=}ʯx:Jzś1\l͘WuN`}}yˍD27>51J*x(L`NG|"7 'QzG@e\r/}qhܖcoIJ2$X]⦽1;|JE7N ?m 6.}H;ka!ͭ/@h}ӥvA\0jNO7Mi8 ><ќ6HO6䜯ef W-?>Ya-Z^xrz kڛ]۬siǺB+\y?,6~E $6XۮvQB[g67Lj3-{[_ t# Lq82!tL(=].P\ӥ 6KPߍ˺ p^[eyzPv&$Ӟi&gWz!ERS9KypbSsekaf҅i](齐Tѽ$t貵{;<=v7 3m^/gw~r.`>Q󻽖3 ƭ{sw`}Blh5>F] $<7F#īr @ 趟+8cMI4 6(h—)s8~kZJVE.8mcpړuE=[?,mVYG"P ˍ~Ûᠠ*-6Ӓ GohE4M/?ޥ/{r~kۋmR'muL8SV|LerO!aq̵ miKb<.ՕjOS|"#c)R[؊xA{߯Ҡ[H_o{RlbLPN3׶/91T𱥬@1Xm&H͙0)9%>6ssspQCl29稔.$7\p^#|"P̼n޸5ܝ͑kGe+>kZ=ƍv󠜟ĥqx}>UVdmB(؛C=ܕ@r~iVukYP9U4Cn 2\i.B,%_<  99D(gS*W,(;d tddtJ:gEԡz}=WI.~j+Y7KO%h6l;|g[ǶR$tBJ> R305JY ؤ00=>Ӧ}`'W'w? ڇ (CdbH Y9'$s9xa*esꈿn4)?=%OwMB)Qj\\!kmhF˺E_b b 3ڢ2H?GPH 18BPg,zL亸ll_z7ſnY%D{D9L!+zz};ߓcak,]7ݘVъM:|Io׃ɻYdi)%斖!B'+^>a" ! 
Р0llVfMW)qrGO_`NNsVo?W-rI'X!ϊVke vit{͹vV!c`Nh˭viѻ5ȪQ[sĊhk1KN)h}B3xPs-= |&%U[3Vf͸EV[BWYB u'wk"%#8NՃ>~=|$"06d#91hyс2&!٠B=(!%Vf(dyL (jSj5 hCce.儉Eckf\0.OEkW[;jmjm:!*+m| Vq @2pL&HFM]3Hn2Q}l>kT'c-˘1m @r \)`NY>l`7b!c+o:I$}JYKb89z bV`#+I%4qVǴ7>B1rhRm 62ȮFtI{ OR'$N.(^PP:xřLm5΁2eD;&kp"FĪT]a UttAH`ڃN P\٥Z;L/R(8Y'%b)צM6}H sdUG1~b:z$Zjvxu.hE(Iٴ^ Z1ycTW7 gY{F+D>B"3;v ܌Y4Yj%Y/KEt["zwuvj z<\t{wD_Q}hvdɚg<#shkY_q)GĠKʢGK%d9G*,-Q6A':`'8m6 Ϲ_b&y$LViǬQRʃp` ֶgxFJ?:+As9$b*&hz{ī_p?\,wAκ%hH(v(ƨIs.7)4"NsadPhOmK * |BE^bU;"E RrHd QNA%PD0@AP(@2x+{.=4*I_o8boS.oI"rW rclӺ$ly% V9B$_< \~âi'BRRO@% /)KuoSGA6Y%k:3=;B: %5$F ħ&3#) !D6gyXHGpa,&ukYA\mV~̋ 9G -#jh2Jc쯵Ѕs/f^5,Vaʷgw,=spDlBbDBjv¥Gްj`Nf mFJ/I`Yzڬ=RG{ʹE$9yq&L wawgw%S idW?3bO+/gOkd 23lFoWW*%x7}HqRFɠ]/HoGv{R%]HA)EhNjy}oW  85!!YffZ.-g5*}H\" &%R`q=[&e|6%I>L)(:~ثMH< ӷ7ѻH}(,b-{߽2ME~rܮԩзfŮaSqT6E>{pr'Kk E–9]@T^>M1 浠Z!uq+Wp$̄HDS=n%(A;!gBZ@FwKDSU'/k-! Ia(HHB NieP3Ϥ a3 ^-ð܈Q[N5{[N<ܬnylkmҠS S8MHG+2rl)׵9.4<0s ZV V0.VB(J._Ī!9TQ@ H#N@Zby"e(B l,l dCz |rVk/1:Y3lg%k|mDzа/|`p%i:'K܎͕ZKdv N;:v$ӎ#`3|HKIE? >@7_J,H){S&WfGq8~8`is8x?K ?|ws40(\t][20ÒJyLO|RzsUVo\(-B l0VQhtQ[߶2UӒEGR(sevsɲ(t ]]:xaؘDˋi J*;InY"ig ?Ȇm<.tip#/RbI\[EuXK-HJ,$뿛jMfx2&,=2Q-Nci{fbKm7)l 5L R]p`խ*jΠߍxSMQ*E&7^4 .;ͪJs:фvi_j_Meh\npBr%gG#;wy4ϣdˣ hH)5?'v%BgĮD J2vv]ڟȮBR3bW"lUM-gmgW J!:v ٕVURESy6*φ]%heUR]}3)ɉճX]=ZNOîF`WcW.=@zUsaW cgW ʶiWzv)۪xZ? 
܏(?~?_ȹ6W}V_aay2?bH:sfm.A{X_*ISw3w0oJbMmG">zQ0FF́[kl#Z{9h.#~Uq+ "'!s8gJ7hi uT:`='I锉\+#BeI f 9dZ5D%!Fs-V%jRNBJ綤\\gQf5*u\lQWTWԱC$',C+Dem6NfS?s ,ոBW_ Y BsuFljN^y$hOi<(+d,ɭ $φ]%p>vm&A:kdW`S3zNn>vr|6FpNP΅v+PgĮࣟh]%p8v=@PǮ^%bJHz|ݛ8,MoO'2JpP5;r%]2Al-5a҄_-`+6o7~c5~g~wo0 V 3XTyL[+f1>m ŵhthjǽ$.r0)bNȉR4gi$G~[e+洢Ӯqê2󢃊<*eF(a\DKcRs 4h э&cB)7j<[֑aYqGkZ tBٞd (X;#>_b=kl.ڄLZvi56ՠL GIW}c4c_IjiB9ƓpɮfjV׬DYIRNK vkL\"G bSa #U*F C- d* r#QHYu2 8cƌL"^0{l5jD46Ύ* Ԁ~>~ {딟:?n8zخUYي骭ɱpdJ?1uj-}&[]G:)ݚz ȍUt$Eh{;U\ŤmN=!ZU3zl>]un4Oz~lDis-7x8㏾m8?H@u[:ݔ \F`nɰ؆On|1=V~ͩgպmn^v֩4-f$& @Qc=s1h]T =a C.W^?a C䌂+$T΂#"Sw!|5ґk"טvk c*H_l$4oPmԀ٣6$5Wx.Xw)e4V =L~6nAaѧA:%60õFXKGR$3A@R:2pM;DPނ\Y pg#9Y\*/I{DD020t'8M gvaXR."6JmF:!&DURYﴠ+6 z,2%b:xxw 6Iu4co:N8SrEs qsa=tJgrVhYq(VoSG+xo4L-+p캀]ϵ'^-ܓN$ZZ/Z/Z/(GyюycB ,2'5T`3"iԤje&d{-v4RSL1.5.0hiJ#A԰IƓ\n+rM48g36#՚6 iK9 Cq}uVAZR,4ä!ZbFXp\ ouhK 'Su=to s&FlMG !$HP&Aj@1A +NBaPM+̊hX)6_G0M1t5Ă; YWD닌ŠhmJ;Hu&HeB(5TH-Ǐ;U1Ȩ gg$hr3sLF|/[K)iުN_$^E<9zok?߻};=}hr^ܱH豯i;2̧,oYTʎCBt*\()@2b TQ?^3c,QKB:\|lVO03 AMT)-H)% C $J .5UȤFQL^ lT?sP)c$pP8q6V;E  .(CS0$@M gϳƯ +; }[^S H6!~a=%^;Fmc8-Cmqꙫ)5:F:4 w~K0fX Gw" VpB3Frb)Ag`B왇h~髜Ŏ^](m`B2Ͷp 4R1IOKNƳQU{b_JK͊&XjTI([a}e}(M*J8D/.W\3}F"wQiJϟHy+$ˊfeX3= T-[ԟBAYoSDݽ?ڄe3[|F?[ޮH>,jT%R1UsN<\Mg]Ł){$J䍣93PŚpNd߿I]q8+ʷd"[RYF5ؖE٢\)fL]:3W$,rU,xz[%bۊ}>1%Y"~tzgHsGQIW}|8G6d9K^ϸZh$M(2 ڊ9eoĒT`ُA$ZDHU.h _qA։P(6ZZj[:X(i93p^mI "C ,n("Й1=O~ܖAI 'pAJ[ú8ʴًi*ѹ,ec$]xYG+|Wbf*`+$`kBWׯ*'7xv "wPrП~9-@Tpp{-WQR8CV&#oQ|VnOpyq]w=^(] BrQ hKIqhvŴ|77GgaDlwTʧ_[wsjBX~2V_O~_Ul{[QϮzHCOFQ.OnrXzN2GW/BR 1vD WVDTT " -ZDi=h$C92+eͧHIBJLt9T1q癗 cz}`8P2lSy8GRr%F}ذT"^8j|ق=R.53 eAv1(7N;2$1'(k{ >o]BdT("'8)2I<:QrM~9[\(&oErM #(QC-ea@=SBdXRPeƐl5Oi"ْlጎ;xX.?v72u3P Tm=Q6f^GԀ\*/J0>&6^F*it6 x/Vk\> ޛXMBI6hHY9I1`].dhTu͗*0m oa?~a G.>I| \ .@{ ï uoܺ|gg`Z?(,wWLa%}q1?T} އכs}|E&&Կ^E<Ϡ6L?m?KrWڦR.1v+ FYet|J >P1>43cř@VL ːKU s,PY#4podAeaw{+Y [l&3ݢ~~=Ï4_/Kqc1fq>3cuLL57%-?ݘ6WNJw Y[jӰv mqF_v7i.>\܊ t^4,l6=21S'B},qoQO&͆}>ms/V<ቇd;&n.;:6a?{̞ӊL"oE~aųiYYA4N:u(gng΀ra(jڝ8y24`19\J\J@#HuLjHD'+NHo6d1'۔$18(I"|$6 Z/Z)䰼/]h3E PK^F}u7T,3S 
x[O3K.zPɓٞ,((+!jr-9(0$0'z%HbR,F'rsU+P(I!F RѦ:45`t^a<CKKo & Fa4H'(R \Wՠ_B6q kh !u2P^8y5X-ThD\*]PWhA :Y 86q1ʹJ6%&(F4P%Gy0_D跾rsH  $iLB.^zirbc-IVusYg ZYo ́g2J\ę7/;3}܀AP `KV6%ܰ]6q0`:-c{u4g5pld!jneQ5ZrA$Pq16$ RlHs!j4# mGPjD~?wo~E˜4Y+[57KSɬvbJNG)Ȥ2hTk$*J% q&煷x9y#\k 3^[_<ߛpƃ\zsECf*睩ER9$[hUo.!82N+.[$QRXM-'VF0Z1f>$̗d%srFOOX˿׵AWChZ!ћzܑȈ-b1J&ڡ18H )K$(ptnTy2&gG!t0> oW?muis1٦xV-\b uEZ31u3ߵfvԭ~ݤnDZҚܙ%3$֗ƝV\ͣftzە]*n\i-gKԜ<7r=?5Mn;;󈶷KE!&nf.=k#YsӡۦaڶmB5ctsV;yϭ ^  fD[{=%st* 9`XpIP&s*FVƆT<T۾u{>9QF.~]V #ur)\ІHo$с>iE=2{rjתʻ Eq)WŽY~Ĺfkb?6 >AoI!ĺb^o۸ȵ.Uy7 +JfzgZ{Zz?PUYVG mVii 9=Қ-z~{76l%*MX19B,EHŸNztk.7`'+!H2229=`Yxm!!,*@p0Ca8PZSt Aznl $ TR$2A3⩣FksN_"w>6?dsjm~Wk AA=͸P/ k?qG8I>@%5@*%9(p܁}ykӊ*ޗEo B!hdX|1|L)"wrQHFmC ]-hs%w(pk"UN9֚|0]Zp5u<bb);p:ن}Q66|x{&EpH[|%6xva0ٷkZYN &>Xn s+VdQKɫ}hIEt@0Z8dO9\D#KUǍ@TG:*Q(Q$wAo!Z橏ro[M'Sh_-Ffף:M(:K{+!yqB48%86:trOOZy><:VVR*KIM4Ƞ5.%ԱhՌRBruHpAkB2xkN AR-lB%UZ3#gkQIta1Uº:]xP]PEa.Qh\2{i'oT~ߎ'_fV$H d<$!\N2<W> ik¬E#]5baӈ_F,HF$AKY༆(G=: 9%vW]V}'i]@( ~ +r. $;M 'Q4wF&o令ly !nRZW(+9&.ze"?LALWOWްul=l/[JSs!EQe9뫻㧋L0u9]*@ǃ:0"ALjM<(Á!eF2?7': DL$RlL W[^ZHevrTXIC54p͍N6p/3P$gA]6O s" ڍF>VBH࡚cc;W%Y(ך\\/dKq, ]oƢK|5Yhc>/ SAA*!U\ldFGg\ 臣@|m5M WϗWxf˫?w ?/ѣO<Px)glŽ?~RsXjP`?*aF%b`2I8R&ʣ6 ܣʗ`E$AhE c]`6&gk3ˋ =׋/kk&^`W{M4~gn47ޠ}컑Oz8v'\kg='TލQ/z-vҠ:`C?i~WIGYi"uqf+I S[4+ⵗ秎I!:ZC$0Q K, BXn\[ty3V:FZ $WmD$.}b"ܟ438 gRXfA?Q1xΒ\pQ1r6XN ϫ,*fb_{𾚦m{~PDybbǃo}&3fo>%Z儷ʊDIP,r[ }R2+g܂S턷9.^Ҹ}TtE!B;epg (vac|mǜh;F#y]ځ}%GReF\Զrh]^8#982IBH\ .GH⼎;ϼdhKB.&1utbrɹeJXӠVyCj̒ l?6}i k6-h;tt*tdhyą"U/z枷~fn!="^wh!H_^7Ny_p}V3>C3n}{6BjS>u|Oåȷ`Ȩ"PE%J O*p`Sd':Xx d~9+$Q2? 
d #XC-e០@=SBdXRPeƐl 䃥E'DffD,F[">v|?Bׇʾ4c v0psL-tڔ`}L(mTp .al+6ypJ7T$t*R9XZmMr&NR^rB<)]49%QuNrcp8I+ٵ;S}jOQF{j.M \pq1M_"w EW}O֭iw7["߽]R+goC/(Ν_Qs9]M'7g)7)}JN@"84G;~8ldD`ygE G ]A^'Rce;|D2;tYwBX:8P6:xˋiKokJlJ7$Z6ߺO -LNq.F?9i-N=&m򚜁H* ;\CWʝvX0c›К#5ZY?;7̹8pO:eayvIjGNѭXvf~2v2nD7=yOkn45 );eҜD@'}ͧ7QTmY CQp[hW N^¹RRb'Q&} ^YǤ]sIW:X7uobg}1k^-sFfb˪n`@$` :ꗏ:")NR( Xtys"]0 _)]~lvF2;_hͦU'|R1߆^7G 'Z^?T?rc^%;>X=ƹ.p ;C.= H6{H/1@al@فMal'sD+9DnHIZa ίַ'OzOB8VL|1(iz3A FMs N)Ŵ49JF{4辬`gr`r-X3wީHIEdͤ03PPc  0BGLܿ> Xq;BhјY7%AsGBL0O3c +^*PB;ICM.L\8Q6k#UXXM]LF:,#$54- [ M&( GS B3) Hr7'|CG'њOS S-uU~pn|t}|/;+/-s݋ή;QnQ3o~Um_eJ_WI`៱Z1/2pjns99s!1H!5\ur[',{1A?y+b%/LZ%=q+ ѡ}PaMBdwָϱQ? <>T?Sb,_؍Bm/*/Wz@ԕK\e31lٿm|oZHFbH ?E\>Ql0(bW2ZXb/:%"j]KA)e2mi{J`HiC(XK e`L0(%Bo@t2LdSMI"Hri:eDjx׻z}1Bq"ݙN/+~i%=.t] .02#GS7c˝#jtQCs :=nݞh +y2pp082-4+X+QjNЄyV=NvwUҷ_]Ĭ];&Ln6H;WV· ^{[WMւle0-Sx: 9 jont +Aeo7뭕,MȄ>^td}}8J{nwGNٓwʗf&BpTG`^ a UY_pX;ъ~WNhBn $g(wɥ+%*qBS|5ɷ#_;Qܣ(J3VdxBٟd*y'"{|- +{@.7֏a~01fmXPV3Z>F\>&<e@<}S-٘j\Tj%&nE* mL'h&)3W@"#"lVZv+H%zJ &>#q flU$WsWZu 7 +%$U$XuU\UVSW@%WOP\iQ[JӞmtCclkMt9nx)51YOY}m^d`k(Af)'CH:ҟZ.O]F*%jF `Oߊ`NՑv2u\qG\mG<⍸ud 3W;S#WsWZ)O]\E*nWo%d<d~w58W%Cķ?*L7Ű__ ~ έ:g69":hS \9[fddcyQ)=ecim(bh`y .}ƒIrENJ C55EnT9SNJ( @7v! +"1raL`dÂ2 hrFp <ߗMY^i4^^0{N>W =G5yDsLw I0I;IgّNS-5Ln.X,qݼۿBf\_`E G|ֳfer{?Anr%h)A) |>b醝hnQ&}iIo{ziIOafdY7 ߜ$ߜB:ߨYdΊp7fD})s6 --XMxEp7pv\ɗn8s g60QϮ]WGf))b{<<ܛN0v,\g e;I>T {:6~3{QfeU.z5sXSiFYG"a4׮S*XQ+7;Ǽr73wvUE)Bbd3i("NB"4 ci3J"'"xLf |{V)X)F1iL=6*6&Ж֕H m^X=ƹ.p ;G*OLN=` |Эc)yA cxVCM.L\8QHfV`:SH9bdeoH `f A8I8 lf}T }~pn|A^e /2t\}v[FL_Ww҅c^WI`៱i-F*=Efz>nS͠r+&bd^i- lb}[㠟)/-WYLv&R?-HIU%hǝ[D+X?GjAŲn_S Paٝ=+j5slO90.,gᥬʸ8EDf{d-``S2enlU@ݴBW_%P0 $G! H ˥CR2oa ;c QJ^Ud|7ʪax<0IQi_N>"^QHQ zCﯳ/[(.YĢXtU;3ٙuf<.v$T7f~#mvDp .+/WS'WyR"fV=U{\}#Iύmf mD:o|ڄi6J(ڒ&K(iJ9}P85:KNJ㜗>++!P(쥵:yR(|ܬ;g8kLx۲*ίJ@vs N1ۊ\+&q6!Pn7M{ YyP d3 &=P9HD@Hjܩ;w 3޵S_^DJeobR&&bD6lp4cKM*XDm(.y" ,O3.1IQRDBѐH@HE^]^ǯ#M89AV+ti'$\ވEP)WLXI"Zk1ֆ B13j;~N j"YB."!Rndg.4j? 
[binary data: gzip-compressed log file `kubelet.log.gz` from a `zuul-output` tar archive — contents are compressed bytes, not recoverable as text]
\ܣF{ۋqiIslPBr|b̠JGSt8!a%`c;vۙxaWO]\\i/UÇV11#Qu`D6CX 3 bPT8xiL$NPtdY|!jmH fU cJ Hv9!.V .B  ) >@(E&rZ#d^W1P>DOR.-1ZzOG"LPb2$@:;B⎹11v9JrC#;QWq=!V1tr<&[WBNC/ui޺N~(?;AdR`̾ ]{ #!eDC1m>B^Z@mDebMuj WLE2851 Eg"Y`Q}AoCzs(`#p[@׋]>'mڋ~˹aV&gy0:Gtt`f=_{nSQ-> M-4QG[su^kdݨyHY#Gh4v˨ ,IF_.'з]Uf]jVj7LJȀ=T9 ]Qn$|YΘ"SA)aT'VA)O3˨zơ2XA|_-`E^qH"R6Ԇ%AX0R+aX 0ja\ fYTQc$T:1YJ#L ^z1 U @۔6L_Pc@sAвRw 5֪Y uk(PUI6L BV tKvgAhIrs5oܘARwUfTڠ@2XAӪ5-o(-85(Bf@S .8ɮZ{:o(=PCXm(utqH:4JäeE8R= /J1RKEv,&f=t&V IL](Ijl$dipM(*?#tͻ+򎚄F()H0fAAxknt}܋[.}zޡ ;=J33x7?pٮ"P'Ӱ(Y[_[ѿ|\+Iqw 7i|r2wtˏgg?]5yq;n1̋=/ssWG:ĕ~|{o/ W y ]~kżϏLJK~uMzu!y_O}[O/{?r~&+_+W̖@v@i3N )9:KuH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R':-9/8dg5' )(Mu=G'PdsN: N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@ 0%'\ 7qnΙ@3%wDV @ %H@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': |@ ySNq.f@@@@ @ _H@R': N uH@R':km#WE%@.8 d^g0Z4'YbeInV(;j]$"YUxO T<'P*@xO T<'P*@xO T<'P*@xO T<'P*@ y=Z/{^{ Uz}X\\Bg;`$8R8.%qRV;.%),Koq[ jJJRvJR2V DQ!v@\dW spRrB \Abbbpќ'5\[vhUwWlNl{E_﷏?ԛfzs|W8@P L$,vKNP`: ̻ӵޑ0Ti5CpI\WIZWIJ) \ArzUXv3G IZz#TF{p% uu3pp]ERDm;p坁$m$ \t$ڥf:UWUVӶHPaWojYPz{S3v(]"C 5\3 Co4E/Wwa6i @װ"#}W(0?~4Om<{ȋmnZ71/{ #^ZYYlŐbRW1ȥWbt8 /UF>~ _&:ˤԺ]'~yqwuRt/Rz"}* ,Hg*+yW UJZzpEĚwRp;WI\WIZzJR®$\I&r: FiIJ2az|ol&Ӥ"%yqLëOW">e[?u/?wѨxR_Gq_>/l%?ۃ\o.^8٠煮:fx733Dt l<*H,#*h VR "OiEJ6v!gd AMmE}n0U./)jV]1I MϾo{& :̪/^wPFλ03j bs4V0#(bFJcDcr#2Cv#_QZ9ݖ!ThoO ;\BAwg<4"x@/mM7=g7a5[oK|,z;G^jAM9x s>q˄+A2N&W2#,|郂Tߍj:`qp?ބ#F7;Ы=܄aoֈ/{oC'(pcp0\BsK+٭vŻ5MXcIE%6igë)hxjғj }rzDiE(e@?B}D}0+;[WXp-+#D .)gdk0r!?kx4cHzh`FP9Gq<Ǖ$qdjlDz}Hт{ ?Bv(+!:BuDa%K~Ө0Q#R6JD7Ay5p<G/$ (d2_jc N`!x0H㌵30{-#챉h ݊13&~[Ӡ96b|<|H+ʍ_Hw4]ZןP) $O31fSu ^h<^ڥͳ)tnCt$T뭚^𿋉n7Mtռ;=>m vwoB JV^}Vf{@+%wd4Q:hBOGø_צgSo۽OZl}hʗjL^5tz4#Ss~`M.PK_5/v8IM&ܓӱ9ʎz=#oݧh!NZneOL{LH6jHXgnoLg0Hh+Ź#VŊ* I~"#H'3yt #cę LeJtEM.`$+,(&qG(#jF#=8d xAVڳn>MV}'s/^~S3Q栩d7M]'0D$sUMx=A`DgXvŁ*IH+ qb*Nu;w[هE*VilBOgrzdfJtTx5U _D&|N9z!Uxd!R&R/hA #!'#61;l;H_vQPQBҖFOr M.lz쵲2aDDc&HSE57V"FMsFV;|=ތ$Wkxj!]츜iEʇ?̅sc̱j6bF{ɱ$Q+-9R$Gfx1l~ /.oM!g 
A0PP*FdjmP6~Z}|fR!])$TłT#"ᷠ/T{|-oxռvk c*HHh `Ϲ6$5Wx.87<,Rg2DO"]")qJ7Db΂wDðq᳉ 0q2./x~C-؇ @ɍug^{sJn񼛜&;# H2YXXI 1X_4Üj=.7m[.tDӚjگsekXck~[37XQNfaa->w*-c1k+3:):W!90teU$XLjAC燊\:T5Ol`k.PHx3jIs:FDzXZD*>rx5Q^PZ霸/󟗉:n3=֞e'Unw _ޙBz~8; Ic <Ίb֣&FGL-*O::t\q\q+\#S J{h@E睤F5HGb Pi`=ȶY"BWk&'bt`#ʹJ>E&7f@V jt]'bbr>qX fs+&$$Xg(r3Dt*KqX'eaGɒuZ8R]J &D*e'!PvDUv dR%wV;?~>rMSGP !,(@$"05t0i-T qHtމIa <* s~]`&$SR[)r'j6qOp=_-$4wϭ^^EOwЦӝ]6܍#-'.-l/cdwY8+*GhdE$Њ{6j"VBs$rF9ÖxLhVdJF(]JH1L.\{'}j93@*72fg?2*ٰ0f:3'µČ\?<0F?/P`0~WFlF4U2hB|$a qN3. ]S`r/aQf2Fݐ= JlR!yIl;t$LE:ANGُv4+&f[P;vEm j30FMѢM"CsڀEЈI:uAaBg ,d`h5> 'H Q G ɌُQpkHD̦"bfЂwid|Q`!), ,A sCS h͜Y+ l: Ą3[XA`IX҄Ij$f.P䎣K|n \.ki323.NG<*,a*sg 5`œ:SFc y PLe-x \l fӎ#x@X 82Wn?#Aw( L“$.]Ix]qji“$T%Lxk^p%;czIrY,bxbJix FƖ;'M[TuN0h NFAt2Iwq+l(!\XPpQbS`1VqQs)X udR,΋ԋR!p|l 8-N1_vek}cѮl{ocz۞BTD“CHp ))2jJrk%Y4tiң(Vlə2pC1 \40.ў<8xsZ&WX´}yxWh3U*gZG;Z-^WCȘEpIZq{wHo7Z-/?v%%njc8J..ޢ#dkګ+7;9G]z ._EnD ;S骇i~㪋ů QlU~K `>?}S5XҧO*@cJdSN] 0ۺbYBE|0[Q5pinv{Ņ۷'5h-]N?n"P%;22 iͼlZWZ 'IR5cfKkq8’1͙8:#O+eu嫍E_`U!P[dǼjx@1CK%,P^Eg9IyA͸ g$\XV)2」к(i) NZ sF`3;^Rܣ\,>Oj .P-ذO ; _+ ÿ i /9k | ~ p]@ {TE.)n Ǜ3d7[<^4IhI(b揓疁#8`[(j•w͆;tIF{MG&9?8ڴaVwgap?mz+Ҵf]XN^S?zHL)D~fy!Yzײ>uT*PRBҒizA}{;[HOnIV]POVV猛xji St5z6EAq2HJK79:ȤFb"Rv(<->Em18(8பRe (Cd)ae(ZF$Y5rynM;㗣^as+ALIߗTU(N 6?"ӛ<Օ rG;Eg@h$p\%aZQ<#Td(\M`T+K2K1>|BlXųE=dL=)%x4SUR ЊR- B\d P v ;ACb(d8uf.AQXṅ>XT2 y$g IJ#T*\LbRhWm_ q>kt!WVJc.ъږN-P Pv(]&y@[(qگW'$$ Qд8c9'd@cz!罀(V\-hSOm2Z4]TwItGK? 
3@:Z6jZ# n(% d1ʇs' [Oݢ.O}R{>\Njo4>clݹ8gm աIte$ݸl U_`Y*e> %I ld۷0ո 3%΋ @͵htxeADalZN8A< .@@= m-XT'!mǜf* /Lds$4D:#Z8XTn-W]HS(8Y`&IcJ0<$KS;A{J8v_f85X`!L@x5o,;D | fq A-b<Ȓ C & /Zٔ\wP&LtjXr@:N( 48\%!$*D^iO9̂X8@Mp Wpw38_yr ՠ|PO6GFN-$LETT{JhgO=# k zJ )h79˭kO-jm0uL2 fT][ bZHY~J%jF҅rƎzKx9^6#jFm: m^73y9,@,pAtCޫWR_~Pԝ;opD-6f6 o>+ յ rY!`?C@n,.GɒT~@<)c/Gz%jԕިL.2*S)A]@u%aj+$?9ؖ䲽2\u]]e*`]Due2j=R5t=>e1TWt4Fø1P!z}?]&48!$HP&A:{9qNUΌRYBH){y3pu):l\K./Td0YV'u1}[[&.p*c?͑ JP\Xd%pJ^kfE%[!`]*8+ 8˱jɦIP&sJFƆTgj+afw-֐,A KH!)pu`CQ$Gk#ƣAr djG|1 Գc"a ag?ތۤF阺bPW젮z*5r2NF]erJuuTuSDSG*hkue*S u.ST=RWH0roU&}QWڍ%+M_`tUGVN-fEQ>jQl-vY tA76؄ CBu7"we/˩rC#r}W< WoW_fvO+IŤ^RhQQ<ɃH>w˔ IlP4*lk3k sYz9I!O3شK,`w3s0L^moQ'q߷غXR$Y[v;iqd5ERUXf3`^{Ca/357vSGo?y%[q-EsMs^} d{.3)xUeL) q6i^]fI U~煠I}odZ*J]ͫl_ ޭ̷R=P1˻x:kh3l3+{'ڕJ-|m q A ╋`<%8=;Q)eїR~;U)eщ<@\fʰ ] KIƳsKVI/J%;gIl>}"4v*1ܦ=pMp[,Fk29rp 2mm8淝*z _\S/aŮȭ'iT>Xơ 1@S7ӦOZK:v )2Qݸw]z \nՄZWԺi]ZsT?~exf!e5ûݷ|WZR ywaeQf}y1̺'{7fT˒4_4=$̎PyN%>ח7}(f=MͭIMB 8Ld |;ch)bDz/A/ :Z6`-+skq H׶!ڦo}}oJ@̮i 9Yy)\9^%bsu16h#:LF> St<*I[CXoP4۾V+ŰH&NJ.cZ,=ok5ƟV3V3^gʬT}M༑9zd@2+Y͂AiSgVw޺Ļp|luMjO- v^[Ck|7/( td7~sHZ^&'W2Δ^% FYLA2Z#QFgydL9&2"* DK~Oq?Eywg3-/vDu@,ϫ]dϗmpPڇzqT7B`M NZPEoIIN$g*ɂSMIC%,;JQci_I1)N8sKg|#1O{,(CnCq!;5K񏎸7QwGwG2jbRF`eIs$c(A:F(xZ$B9 .&`G{ IeZ*&' <18^Eo 265]NL\E rMu4v-t8 xL<>aǫJSתT~edt9͹"chV,*#g} dA*5c0\T&fK_RĜ5qeRmKֆdl& [cemYB/ O* ײMo 2={x`–a{g *7 .AX&YE>ɉ @G-K`AIkHJ&8Ao7D({. &R) bFh2v m`X]nY0l[yXb(]ڭQǑR[v`<.FH ҧU9koB*A|ϝ&ZYiGŁ4dBH251E4GQ4)8ײE2mp۠䕓s):oa>@۫0/(P_MfAGH9Xx71r^Y*)+5eA0خ7;a;m~RɶRյgGS;Tۣ T,E#!GAm6P"7l bZvA*RB(M*阄 -m{6u4팽8>uJ2kRƱIVP'+Ԣ./wlz[:t]ݥI6,)DkdfJ&R{DPt(8*`CG1} U%}ئF7OA1JI& h87#$-Uc!bO4K+U"p-ffCߋdtV*Ȉ 4Dq}7tt  2vY!YO/B(t|4HX2]L\B K2Zg(u_mTF:ƗW 7wN~ aE,$5SI1$'1%e}yRzݷ _Z,2f?|>bN |ak]8ףXA/Gso6)~?݂n'w7,7&0Kݏҗh,Ug_5wnEΰַ7(ٴL2d|{Y(}D{m#r=[Nlg͸z)re? 
@`rѬ=J1voqfF|܎1^.t0ҤL9M]XS߸xA<,Mg̰/4<':/?U_p1n8?/.stB5ۗwQVls1$$Rzu8WnLݰK2N!, m55KHr`=w]* 4I ,h!osRH46Dfu"1k)OX5EӢ?-Z؋}D<օn4'N̞M@{3+.gnWŸL}BC<'wC[jM[U &zc̭͑|}yJ{Qe~GDRu`OZEA{Q:?< ɉ:OxY9ZpIiM^;DKʧ, E"'0^=J@:U=ߺhxKQ*ur3I: Zt,EtDKlGN*V,ut%58*q鑍R8qNI7-D\2\+q.]r3:TJVWb -#w3xn9^਱nT:a e3k+7Ja#/=k`)M{7XE3 69,$#,Bl.xJ G"Yhut2ڦZeWsXОb3Y*1bD *|vֆX/>,F3@VŅ@Z> ]HRs%`*5BY3)eBɪyƼtcRD8%^''%""OF1 (q{ +!6U"ɴ+kw3s>Lo{!ÜŦ,n7ߨ͆Jin>'S^mI>%{4\Wk'{$J/U)L I߯ $0i {{kTZ mP&q8ܕZQV4Jg{ydטQ " u)#W T:aacN~E ԖI(mՇ1T^ !S$#Y@SMf*6ݪ)tvOS BrXlv*(@13'El,V-Evеc=vİd͗!0v\*I7WY'.P K`Ơ(irfXuԣNJ[^:)mQuRڊFzJЊ"hkl(`@ aG$&j`SCthaN`%& 5=Y{}2*$n?v5nzj{lOOOנQ*T00^$'@}KS90."`S̙XfW1)Q#N5Tzf9B fh2;Q+z:IZDOҩ#EFE/ÙGr$*8 *XiD$uݸAC "8NXr<3 wQ#2sd3T{"L~G\:&Vs#QZq}c\¼7^W-jxvW !)CېRԴe׷idCRB<1p56OSȖDh6 T$4Q YAiAj1KEr:1~w1O 4bd V /+  W"j^ʀ-OJ› 2(IIT;&*'E2v.\)L=l9KosR0?dz˫t z 7o 1:&Xc,ڋ-=Vo k?0~\ޙ*64}e mBXSqo-!L{jQ F ׯdt9^4G˫b}Q<1FHg:eZQ#Q|.eR)*ʇh.Q^\^V33 Œ߿{BJpgfO!}Xs UiSV#I FW=u۷2~hdYt E;_!!i]PVCeav^[vz۔q:ST?.qY4 H^5DfɗB1t*8>􀒑%9TE Xb0 V2Ѡd*EV3VHI]AإnA;܁*pVlb~(} '( ;͎~3)*0L_4Qqa)pp1| GOǷ)dr.~}aKb;/dz?, ;RT(`V@|VlYx~_~-eGғxr*\H?EL!:^wՏ-X2؆ PIs*g1+E|ʬ r2·d{Ke^conན 1dtryU}+!~"VfrɴB0rg8PJB "a\3%M֗=f&݅ i 9 a3(Y !υJ@Ʀ2rGuR9H^a0uveo@[eau^^<^̱.l+. 2Pa='>;LLPPHўJ).MUvN̼gݶ.$>Ten\mOjRsՖ^1jKE> zW[’ ]mR^H\%<qRt0' [.u+KW`Z_iN\%lhJXRՉg(4zI T>^{65CoGil"ˡ?*-0:ּ ̋,|Uj8| BhofSQPY Mc^dӏlTu|:_PH;}^P͟o~zy #^Z[lΐbR1ȥWb,JULkq0'1?3aKUτ%tWOX'WwBWwCWwÖ?+4 qE h'}eH\˜W ]E\%ln, 杸zh5? q z8*𡈫-mW Kщ(jE\ FEO#RdT%LYᣔqrYDs<ӓ }cnT9SNJ-ƞrBalIp.LZ&ڮbZ0aryB%1JS/)E𬗽^ 0.'Ob!ٟ^&Y9jT9&9) }[.IKd$.IKd$.IKd$.IKw,clWaSt)]{=Eמ+޵St)]=EמkOѵSt)¬qw`w_kMp4@Cys-XP]`@gv{94¥@טFDXrzg)kOh*bPPAQ x$ )[g՘IDkgMFSѰG1tv*Kܢsm/Qp M-lV(_ETzx/`\ ٭Jl3W.;xVU]HnDm:rCbc͠ԏjz h.&o-Cs]<7N]0ͫY9+jy9x;*H{+#7d4o~w|]h~4ˠ[W!2v7 Ҝ?|qM, ,<1nwxUjsÛ0P0vY4a#A2 <{@UD3Vg 0&fq-|rmJTF1r;+NABې Fi٧0f0.m`GI 8ˮۛ[{ݫ/͟Ïѧ_1Xn^d-}9gg{ 7#l7*IRa[׊zVujF[Vmi=%)K dzDa `]$g!?o@@"JQPP:$GZeSb>Oּ_y(<@ђX4 Ĺ @%/U|` ndÂo4Ox^)$qKLImOYfWY*V٪ȅ肴.E \e `{n1! 
5'L)nB=X?Jtof43o[nh_\n4|;QɀV `K_s&c>`6Rt9*WWPf&q f6Y'3Wp~bx_SWz n&h&!:J@d'mHsYFCXJk9n:ng+0u+^NP컎YTv;,}l-mݫ ?P3Q= թ^.RG4HsyK(#C>FYd`FzW;V/Eo&Q%c9ĠeR12*#n6OQ\ 7kJS*f4{1PPL$Ԗ/"Z /R imO $)^ 8쉁#~uq19ܨMXqO FH3[2هFH1b%,?FA7}LȌWJyfgi|*f\BA6!)vRldjV3o#ߎ/ܽx&+-%eTV74%<؎fmF?Jjq yb~Gm9<%B88W`dp,2Mpt$ *2*s{Vb11KN)`Pl.E38J:\&٣*p5R ͌CVBXQ/e0cJ7,02 Ѭ{ 7M>nfW,*3 TT2$!2J1 n%&idb(^HQ`S*Qg혱YM`U9fA;ٍ~2k&f_P8ڂ =0ح\ޑ1jJb ,s9koC΢ amEU0.xB&f9IĂI8FII;W[~ulXƾ bq("V>r@5xaa2C( p}ʂAٰM(KY2%+'ZS6N۬4Qg|@RIq- ,r4BIٍ}jO8[쓯,9ue\xpM;2$϶Sfs+Ii'$ ݑ\r6퀋žaq JVaxxX@ElBۂ`3k IU~lZ.nfĉ5ߋhԵ)I>HxxG! wIأ@E#rS %xiSq$/P[" Y:+" ĝ=>f'l,i5C^a *M-pEbz&*J6Y}bӛp‡Wh+ 8+017l$hőhYreLu]oRԻN'ّ8Uy#2*dЀR\Q[ (J8b@n"SDr>[WϷ~Zy3ַf.RaUˉydK cScGYΩ-PPJ\T#ٵJ#-:H}2mp&֘Ǐ# =AǷ]vv{޷9 ڀ9Ñ^iH v+66j&$Xc+`ؼbˋƱt^d8k(y6D4zavu,:BRm|G6wӯTd'f7%dpLh)YJ9=JSO}>)^`6`Fg!sGD'OJGzL[4JDTRg^71kRA"Pa0rKP%sA3\y8]@Ij2<4x]ݸ%(9]vrxu=fFm/gxÀWԎ#bAH^iS|P*ʻdo}]8mmb Mv `pc6X솥37Zƪ7ԛ-IFJhȄI MHi,-'/66=i>Pㆆ4$KZI{؜y҈8ky dp.#3q\Y$fVjy$V%zL ܥą`Qے8$ibh5Gmyi3M{W|oT{&4'phG,GlƱlzSˮ537Q56$Cd\Q.2F٨l XUAC3 IYIs:+(Mx PTYxTGA8'dkImёjNɞלb9%{TsJӑИD(Kr'!fE"xVd{-u]ݚSGK?85V{ۡr_NoB|ff+mrq~hvJgJ[0J* E'PT(MDR\&rɗ8! 
y&Erd49-'Y]L93g', LNk-%)U٭>gUN'yWbǮZ6Ţ|q]1X,+yeQV,@_RLcL-nA7,\Kwo~+v~47tOŢ|NE-uӋ%79O#lr7n^g.)G+Y$ƍQLJ!̂&:?~PWJV;\f?"^dy5|'+NQ*ʫ@RA _Jv$7-s}l71|'i+3U7qTggѦ/֟m(UTQMJ6 i%D5= q凕k{ jȶQ: &fO%`9+BvjlRDSdL[cz(QG#G@QMDOz7#&whrU{;n4:$)bG8HX Ι,xI#J%?OLq 3 kC{ JdRf#ElhL Im Jb +H䩂<0*AsCր, Y;9Ƒ9Qhn'?M)!Z-dڳAi}?uҷʺI-:tv_Q82K'ږݍ- 5,!+ ^x.F3+'_#[ eUPL9!0)!Nh"x^II{B-`0~H]jiȌ=*Ȼwg7w&f'i:L1ճp]O_Jl x#0ӆ)2!.8,ܲ]5(?z&ewlmt4}ܹQs@Ǟ#__9}+Pօ_Jg=xj1zFGd'{^R!Q>?>{ ײ}dT:.]Z?]џ}Э= ]'H7Govn)w=s UOnokt4_o( j|l9H< w}3BIeVuBG9O/sg|ͮk27r[2yK- \Lٗ<}?.ҲMq|schoQ#4;sH4C{fَ>7MW`kdh"MoƫB}l{z߇wc!41bVܗ-Z]`2S?蟓 [?y / Gw6z%Hyː &){cn٣db|gFKe*'qbsnYQWrKORMrvp|F~ Hu-\ -|D%<_z_mL>%ي$6MKcne7i&;ƙH"zg> M=i ;}_(t&FMxZ_zK+yr3K Pg)Zb m9Y8qA!sR!&}qb{JQӔ\p'1$$073l*Fs*FZ^1P*1T8Š9lFte9cRgCWT.tEh[NW7Е ]9.r+Du(]#]I:]fce|LI"y9c|QG\,o-飼󭿎}_-fiwT*wVXUT+)3VPԟ4Nwj6Ji*bV1`w 8KDЙ>$wD=cC>\R@6%Fh2fV16ANu}EM}Oh5V}9ƌk%=n:{XZaF%҄V XYZ)kdDWC\>O+B)@WGHWZ*#"+ 6}+cl#+J4Ky@P|1+sBД~D_zׅR kSfmDլK^'l/""8<#P-E'6rTK˞Dn!V_ɦxr-&'*o@NJC꫸?rc>BC&x;a$諆;Z66kumlҵ|:SQ%~6zI~ZXCqn\oIh5X311f b&#B\MgNWR!Iթzڱ'Wzɡ LIw N"P*T٘NՁPl '|VABz(۟d M~1NCdCUC"u/(uJ׳T]遮;rdFt = ]Z+NWҩuEs7BW`tC)@WGHW0<#Bʲl js+B+LP`]#]hmاAffabXKh[s~;Ɲˈ7$ժ4M([LMM+'v8װl ;]J91ҕvZʜ+lYEpd= P6x8Ɉa ]Zlc+DŽr]QtD6tU "+K GIWIWLO?KJ y@1cJϟkw V(J3st?0>v93V芊p ]\)r+B{TP:9ҕ``]!\wTjʾ-Ut"t%T=`T6tEpʅsO+Bg 8~zE&`N2#wö́Ɉ f458ھ4 MM+nqfCWWdZ}+BVDL\is+BtE(VQ~+#9~M!EWfC%c++ z%}gBcWR@WGHWNuVJW|Jȅ2}+Bi:NrXSgu^<cv(֒MN#kZ0@L\u^༕N?Ho>:B{v5 /MtˋHiQͅ[hr1eiEcT']ux}Л_߼yC=>4Џ D%ʤ PmY}aY0QPq^Ϧ嫖vfnsA2RY.Di}|~9ڞnܰbϥ-/׻3,b--%b2'#c~6WF$&L?Qzj# e[$x9 Wo~[7*;+M[N 1Yo SVF9I3D2UJвJ,"yU@Nϑvt?Pf,Tt.0f*JO\Y`FQꀶIJp??A춬Ӂoݮ2!iAh&ծ^4N˖uz%<5vtEJ C&SqS%cT)=i=Be]c{ E4ܧą`G di][oK)U1 G*:aϺ78&H4pvTRD,*f(WэIcT {4M|xKB@л,T]r+V12p[I(0΁be k*+'Rҩ{_(W` Qt(MithqH/ܻJ;W*'ca*$Ҡc=ұe iWOu-Vj̵2W|sUӋOܳiOWN4gu}n=rB.@Oˀۍ!\.)NFQNQ6F4z0BLʶlz}"CD1vi=b眭ln pZ}h{8p8 &&2!Gl={:_mKtYU0[o'5PzyR3=SaE5@(C@Ъu!C`.)|_ UUF*s1-Oov= ɠL 3ڝңi\7%lˋ8n7ܤ)vխz>tOo_NW][S|5i:yAiAH>cŴjUh!/CPhZ$K 3{hScVQ*17PIb*\B`h})*Zy:RrSr YHTbJ78+} |2㔙eSVӳ;u=wm_k1$ 8h؋ O[7h$@Wˊ$2e 
[ޥ$wf<n4E۞vѾk55۫>Y:8e]HwEfV]ZbAiݡα_OWMhhht?_LMvIFIHo}t|ߎWOYuZ[&>e61x> ̃U2uP]Ly{>=nScݵuL4wx:nlm~s63 Lb'}MnGi:SiӎKPFo"wR{wto2q9x ;MonэUg`2|Lfu[[=kFۡ퀴iY.7!UFZݼPX۸ 0 +  w^4,O`2)z3oȘ$N^):B%xLL)GB?2jJ4e jj7Jq!rAZ"JyΐIUA=aa2׶Uvm)#1PbNi2M^Nriiߝ|g_ (=V`vo }H1:Y[j,wQ#J&pƥ&7.ud|f0<\?bgr8i, IȕaA:|%-aQ1DZKP%\UhAΤU#)EF$.2њLce$BWFՆsB6SDٷ>Əs'4)n[s#w~k)}˷ 6D]o]j{tyAZ{>s `Z7Ժi,Zs>a>ëuH-j,n-=?4~e#zhyf[=0chvH/?8^QۚO5ijzlyhyMe>WwH%Fޫ/C75A7?tsƊs˳XW?)TA!|XY;1+%r.o=sEרS,eYY -m߼L+>}WZxq*6a܂F8g5 B:M1y0RyᖳNE$lzi~iat#072ǀA&ΦߜR&Ŋ1E#adMu\` )52ep' ID<3a,#ZIkثN)'8۟Ex Y ҳ.~^g~~_ޖ˶FPOSա^ )e^&Hkyu箩24#k|AtETl8W| wrOZsKȗAi2FQ'c9`dYY+$9cd%5T1v NSUs௛1$tHB)k;0XDM9=~m#^!ܷ<{y>Fp\\7rB, bY)p6>xK3>4 &f钼oSBPIc:}Ly]r-\<3D`KJxF<@ fU;K#*O?]?~wxۉ T-%i9͑&(tɃthQ^IiTI$r.&`G{ IeY)&' <1RIDԼ2Vӌf]R]Rͽ?w <:aNwv$?SuG8{N\z)^|W JP&iKfJg!Ӝk*2&lF*#g} a JF&ǘgũB#,91ң]38Z:LBd6%c=RMVSe, agοdljrqȜ6'_4hdr{?\bM*}&<cAZFu$5$TE M -Qfe(ʞIElJm&6LCV >5% /Ru(+`w1zGʨh2W1`# @2BhY%AX[;DUy6.xTHC&dȁ !Y Z$Ap$EFHNƹpR k(/}*+D_D$N7{T>eXSA%K%+H7N $Y.F>Ҥ Pb(*Kj9"6:ѫ$U>:QɩrT~?ŃTFnUN: -Y\Z$3$ɥ)q\<\2{I xJuIeRl w)q!X`h@.!=)U1 E#Gc9bki Ȩ4 $"NH]F2 m)]Ho)ݰpuػquusr{u]Ji R*_Ҧ^^ RZ 6^[z + $wh) mӍ|h\mVQ$GsKޥ֭L`Kqc@(g=rXEJ# ?_ߺHL:zsFѵ㿬ڤ|Jђw+* $>.;㕝_I{ge~]6wB^֌qwD%s,}B.DOR ۜH.|b.k/WzZ6h_H,G6QAdHND ~q$dzqڀ3Lh!ZDT>dwH/:9R HUi:G#@hRq#˘N)Тc)#2&X`8]=pz`ybyn]_܂InO^Sڟ{W>W5WC ĹR<\svF-7S@%dUjHnqE& 2-7f L N/f`@9kyAeɭV\V =`"H)xì4ZY:b^=74 c,z dI'abwS hBEu6UAG$^5a`A{bVL9JdLXϜgƈ1PaՆ\>,F3@VŅ@Z> ]HRs%)`*5BY409eBIImy鷭/Ƥ pfK`OO+ED9c$Q:% +RBD`[W@fIq:C12 72qR*~N./4 }wfCpA~%4HPrkPI<0m,YĤO} ڨFB7Σ]pH=fUt 5WNF*{t$g6\>7jj?Y,{Mvղo:z#t$,ײ~Ciwh iztvyuS9HMZ> ٻ6r$W>ݥY|3ageOw_c!d'-ZmInQIlQCa_r>dGBa #zG"ӑ1LV9Df.[ m  ñ"Rd\H+"j+LC Ld*0\fc˱38;_Ͻ |k Gxq(ܯ/ӫķݮ;e6B^UfV_t[ABiQB!9/Zr>.A%DIx| 4&j 39MΪdK(%Δ$8=9dnrr52(Hr`]A |'=>|6[sgկPPػ}Uvb,ɳRkϲ|,-+,H/)@*`綵/|bu˵yOnLCQ'+H?/9h/qp ?<OlY;"BVD$X^\ BVZhA^Hoٸov^\/~r8L(OF. 
mˮi?y3,/鳍 7憆u2yU7dUI: D?twŊ|OO~n^Ƭx|l~5!i|]7Pɮ"^O^SRIFXc.N#dq\M&XJ!ȜztKnۋذby<\Lhs})65!A|рS6:g^)o{D$O8Ԗ9+}Alm>q6- Nj~~gѡ$~߶RX|-1 '$ZXB!d'JRBi\ply1 :er"g֢l$n⒔Sp%'=HL\CgpvKI-2W nkJ_MvDi4pOίZs~Uk/~pDd)dq>!SHLkox^N<%%͗q4>ZgdyD}ٛP.I&D6q=Y=ZzƉF b /HÔFyYg= mgfk=Z.d ec \M\C6$JlY{6j}gZg'7\keiZGC1QԵg7eّn }[}P-Jb叾*(pDS&i &]1&+]Ԑ'B^#OTjnU* 'E ^zS";Mr6ZGv+~iT,wZSwVY_'M-W =iܻjO2OfWv83N~)LvZVv:zzaJG!whCkR/a/nVwm:zKwΣmZ[wJKV\%J`}\jlNۚhvneGr?fvP[UIU..J7:IAm֜+C"D% TT'-&MKuz]}-㷷k]./s[lv{B{-B~s0KΡ\0w_:D_춐n utdOc !R , fv+9Ĺ B$a<9Uްd8>@=Ah7*aa@4Mq+XPir(4MhVtAPaH%Վ+Y \CVʾUAq3Е ]`Táw8UAktE(%S#]!]i)`X'ntk7*=HWGHWF*TC sO% Ǻ*pkGUA?-(ctu4te[rzV)z' wbNf|ju1qdvxyF"Ɉwde=7=@vJ+`@]_M_C/*r8_5׃\iDC&|o? ȣ ʘ UN1K zùJcؐnr8G_Кov Ud2#VGacj4捍ZB< ` w2YN>Jz5ϮTDã$l%{lDdB5AsKoI% "U 0*h;]bc+!!YW%.U z(tU w*(ьtu t+?uFnR./"a')-e&2/'ggDc넦nNyNN?]'('g _ԵN/[$k5v{kBJ{6鴕! /M[wv6 nYneYN O׹/NsswV-vmU4 k"#C؟+xu[…kK|VcVݰQ%oT-o72fTݾ(܍l$uRņ'LzJ3v鵼?j*|Xh"5}a?0yʯ?<4?Jd0W'6Gt_9!pTtz.+|*]~Sǥ:69&ʈ$DAyJ0 Aɬ ѥ& _ $N(B;O Z_S}<'Mr2DυY21,A)|J4Z]b,[K:I%H]L;C4s$dcf>/;ǔZ=]E„e:X3>+ #I$;g*bFK1>$h*)\l>+M?g&"0*B"b:#Yђ $5gQ&IBl@OB&VW>j2e[>D 4Yf*1,FAvYh0\1^+"\lttdA洄pZIK@.x1=dZ-1LBiZ/=}JdZYHT)chsdED$-Vl-SC6P~ɶS"[FrLN%}`0D, -H 0խ`Դ(u+V&dE@L0RLwt)NH@ˎ%RFw 68oV#CQ0H\9I* 1Ka dX52`MHV6PPI%X"+#c<(R8“_"ɗ XLb !҈n-zl$K.+$=kSmD]mu!{Wժ1(QdF*nYy)^xfw ώ!Gr 0S|bD p1YLu>J@(x+ lDdq Xi ACmj((Sq;%BpA1nXT5 ^ iNҿjOMFAꕶebE؋.5z5Y(e+%L-(Sj)$&9%B,{SDJCMH er} #I v/UTy V"54i":38oWA˭w1_*"&"8i|T1& ؼ(qE@/ ʡO7rj.һ3D_~oQZVe[w"' :d~olD~pYiT2MHJru IhRUÔT؀#J!=l$Q5$Ѐʬ$JhQV3JKSoU4p/GT^{Ҷ8nCJX4/cŧ%E S!LĐZ mmzv#Ϸ5D}/:[_[:L%q uqGnvJ''A3= ]Z6Rf}iww%[Y:6T͵KBq$B֘r%5]D*d=2(eFAjЀF-A<: b-F=*- *T$kGhE\Q"NdҠNn )\s{ 9;X_+U F-L4CE5FHmr3u#;<vCzHn@УtFdUA;FтΨ1sjoBnӺhҢ:hCVuKќI2d2 D UK.dP?:'/Y%m{10Qh 5ě6V YkJ/V X©rX6i3 ^ X iFN p#fAQ%DySS0 8KZk-hI[dk\ JCb=CgSMh*f;\"IJ.Yx`V#]^ ~ ICtڏ.XA7_PFU~BufW4z;S B.8!0jYE wkqz^ ]F䷱exLkn^Cƻ)(N{r ̀ǰ4$+)6a?_= ^|꧋zj]K E%ӊ?Z~^]]_ɧ`ᬶmAz|z6x7ͻϥ'iDﴬp~WNݷ]wϠ9]}"~kts{ ^kc۬~Og^lV5HzMqvaPgS#Wl:z Dn@썋 v- ZIN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'r@bInGN 5~N Z(*L 
v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'Ђ@ -zrE uDjN Ԏ@Kp鍴^ܙv(um/;yo@賧;xޢ SJSի YWTDv;H9HJyQGw^`1F(6@ Plb(6@ Plb(6@ Plb(6@ Plb(6@ Plb(6@ Plbb Px#HErm7G!Z7HlZQHOc'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vq}^ٯ/g//h)]vy}]\Kkz_laГq ǸOpԡKPKY)xruZ,hnpErWV۹TZ ]AU\'=wQ"fbJkW ĕǞ+O W$IpEj+R%*PWm5Ժ*x'vlrp5;\ߜof_]}ݚg.zu==ձJj}_ om.Pu1ښm >W,]}(|w~X]={_i|767];/i z˟gRU}CN"F30alu|8V4:A&~ܨzP~H%(~{6<AʨI96=ki*~\IC^W$8np^t+R+g+R"jR:H~\0{\PJu+GnpH+RG2+R9Z"tBj۩ ~Y#MWhsՋs}gW7.5(soV/~rՊyƹy[xj'N7#LOLrc]i2bFo#\A\\e{"s[͸z\d ?Hf̎~!T:qJEHp'\%T/"%3(D_3$+\/"a ܉Xi7|-گ퓍]ӷ{6qx]ߺsaJ'z^_6gΣQ޴vnCNekcAxnCvld4`C9mʾGl;H+O8o9Voe`C %ltBm_m>bZ膘8ЩiEڲG[0cɍG^45IGRlєP(hRɰJRj+7+RqE*b\-WJxzʮHpN\Z-玫q@\i?v*Vn7z9T*ή+#=]\\{ϮHjIrnǑ:3^S!OU8 9:540 ^n0MrM7E0f&0i']~Jm/rt;Hь˨MOS $)M/Z#qE*e`\-WA zZAM?u\0TF["!؎pȣ**'T7S ƹT.+yRN(ɦy'!P!2IpB|c6M=R?M脞?5χ6=w#\Ap* H=*#jRpQnpErrZ+R95' + r5r"V2+C|hh'!j"KמpeOE0 L\{4 4%b!}OY%ăirWTzbX"QA4fGC?E0ɍ jsT]-WTYY ~H`d2MW6 *we\-WQ¬1v1;Nٙc$]oOn =-C7#Mzgw~eȑe| BQEC릢! j1%RSј{69VqiͱgɵG^9M?l4af2peWmziM W$؊npErW[ q@\) dWXi*f\-Wh{•vB~+]/"t\ .WFz\d~?RTSN5MkpNy;4 L\z4 /Ieb:35w4f Hs q@\9 Γ&83fR^pEj;J["4f$כ^pEjp@{ij*T˸Z v9;{ܾFSO4(^{ɄŞJVvi{=yb0E'>t+" l bS RqTw+l)IsԆO1W T#\nARf+R5j%:7M ԚψJ+WKĕVY'5bL7I{:hgM/IMj`]%#а ι{{U/*7|YgcBoWJ_=> A\Gmެ^\7^G*o{OR n/o>nΧoS=|x[]goUTq/\}L(q6QԔճ俾?#ڈ]ϝ M=U1IGX;aMSiffriC^JW$"OL/">W2 qV); \A\gWcZx8_\P~0nsq6c~ZP$Çl%5IEFLY{W(VJ#+88Jz,pe*IOYgWcj8iHA HۏͦqF \եgkW .fޖF\2kT aeΠF=+iuJ2wI/uo<;tde H^%g* ݒơXI,[l:M t -2uQGj>f<2J?ǝod^Ѓ4)N2~qkyTX<* x)Tq0-HR|F?#ڿ)V&xGa;?~InKjZ;WڭVn~Qt_V/Fiq>Lo~@'/-w 4HNbS3~P{i10gƫF'7Ŭ7Jx&p[nIp`۬Tr TFjӄJ$kBA֏Ie/~eF[_5W/Q4*ݯxӴ;8h|r3fpEaNcƶyн3_9n 6/g5E6 {Nؖ%w;E۹Kɔ]}3?UW B1bhӤ1|G<YPmZcEަ(|%EV# ?VM|'nokBK-ҜoUh)-X0l.{_h`nlR_>C/tl"-u|x];V;8NDZf,02fjVE꾼[ܖvl}/4P3=TS6W:LE00梟FVݠ&'p/u&ڲVX֘kBZ֕]p%^gn[mI6u^`Y%(z&{5I?5LxT?@mh9Ueeʹۂcn՛V;8QfXƢہ " h@ x큰; )RVGT%XJ 듔үgY D* , \ C$%9?GkJZ \ "$-9}@IJ*Np J:"J9"f5HKݤ`9•c:fx$<] -G*II !\i0>UXң$&W 퓽"s')'zpB|լ*θJ nRz5({;MY{:DuCTő0⥕5Ȗ )V*q\z,"g ISLb:볢wxufF,~w 
WVXVe0~)ݸ7,--D!Y&x&|5ݤUOwRؚ˓w1M/]="aGWI\JL:\%)83+~#$GWI\ɎJ:\\=CiqLpE$.>v%Ϯêym@KPWhF\-f~3|X>9h{2bdƓda"ڌyJ:/RU;G3҄9K9.'SČH*'9OĢ|~5y@{.!M ]u_E3ιԔFwVaǼ4*%rDzgX3,W/|D 7& ~`W9mAvб.Cޜ?n^X5X{0/^O1ި5eR )fô\ J>>3$'gMd5I},2|RՍkSA<0_\YV|m@be۳߾} 65\{7_|#J1.Ǩd\ݨfTQ \E (WKM)0pZ]:(K)A"GgwiIJ+JG//kNA!&˟Z߸{ރ+$J8|bISIo d]$=WKi&^ ){ʹi,7z e=&KXbZ҈8 D`΅B@=J7bxxCBtɒ_T+*#)TD2'Tf !  RAP,@2A6ZA=fI/> ceOmۨ%Aȹ^!&Mf| J*GZ O]V: J_S 6"^ɬp( xWR ]C\=70oC~RV?.EbYE*"`^ZCCtF^ J1u3+ۿL:gJ` Gtb !jq]{Y\s.DT/b1FT|Ijxydwy|Un>%'׍7;;ftE,E(y̦ 01݆ӷ/yUu)ztLfg܀۸=|r>\$mi]iM5ɭU75BYj{Y]{؉ɑGdr3gr8 fBHf$@ӤQXa@@lA#뙐(}(!Qtl=b`NXTRuWz{3o;7{fUcõ$0V 9L: 12 ȸ?e[/801&"^`3+j **Q1ca^*mq>HYGʍ^ywۨDҚ8i0E ĔTTB@Z`뉉*V!)SLG ?2 P@#'d,a,ܜjch;}K#@ H;p6e&hW8HjcTtyj$.k}0i !x4l<=_k rh?H:6fH3%%'eq+ܴ+9?-4K,QK)`) L~i؇ʄ-RZ昶LlY6QJA=gFa9-B(r*9 R Ԛ d ZG 1l,1 WD00skԀ41Y a^ w:0VٖIMkl&5ˀt01(c+1j)ܘ]>}&$\aa:S*Zt_p5};zs/̨_\<b;aZZ%b9#-(խug?/]qoyzdX X"V0*e&b]Dct1=Z^#H {>s4#gW*+P#$ aG$&jSCF>*s$,"McD2W&R/E"&ZHHg`pHn9&Ζ8,>{s!>v:̬&F.s>9>K>?L?6@QPF%`HN@J;0s0\D:*3ѱ`%:yh4BkM̨3h! R*:dZl+y!p׭+->;ZzŪͷ/rML{Sd4%Xazd95a3[dxF#.zJ3Pni$o)|q$=8ߧjCozYϭ[\mi޺4uO?1AqASraSi3# 4PR#E:͏7gtHGkjN\0}MGrD^/_2I?~ Wrqyk=:ĩ$Hp ))2jJ 7Ij|H N\bə2pC1 \40.ў<8x$[,Y=!޵,B)3V_o'&Y9gM^va,*I-I O̐)IQMi$̓m+F 5mIaꝣQ1 Ia(&.YFzd7{a~Y Wqbw{mMnj`CMt"h,.HkoL;r Bn0^3qɻOl9*_/.[ d:ὐ*х]i4,Ԑ⴮iO_Ud쿵U#[湪taD5@t-A +:~˦GC)yuݣ_=׊R!1GvdQwǨtti]]BYq[}x e(nmSHc<_sfv.=8]tq:OȍB~g,1eD^ʁ&^F|BL~u?+>IQ*9(GIaN0W$hfStO 2'+7YW80JRQk"tD,ha9 fkCu]>[/9hBGA@ Fa4LۍAsQhH@R!H4qM9Y߷!NpS"谸]5|*H :#ěߙE2]uŬfGbSTEǜf*_Ld3I 3KʓjC) "1J$f(@IҘZ(  JRx%Z^ȃ=2*N#+@[ݛAv3ma<S}뉝J JJ_Yv]`CP bKV6%BI{}yz&^X߶<ϖIgs9u )x d)!h* $Q&J{,pʫG`ˇ+o.V/g2QA %S;-՞S@$G@bAT)!EbH"(E?ֿ֠'*o)鱤k+o|g] ei~̓ka5LCҲ!]wwy/vM¡W?'0;1O9?oVJ*|8)孔D%d5ۓZ5zC ػKO߆_2hkvihtWnVly=,Z5@Xۆ>ׇQV:uta6OW7Xu}O7}F͢3F no>.ޏp6nkQvy>j̟^2Bo֐0S\IYqT*O~AwIP&s*FVƆTe kS @ {l;؎>ިSYQzpyiSJ:ez\.XY˵pAb"&ةwItOZ#\]B::]ᇒnr+r*޲᧛IGYa*NGKl1W4!G_ VttuڑMe˷=?پ d=}oɼp.]ϯ;f: m$v/cL!3φeaNp\⼂T\(­ibW9XE'Ϗ;%ȈS#qUHy+$a'mOɹ`OHfVYH&~${j:SpBY*Pg%[pJsCQIwt&ic꺻MMÙo(vفo  /.KI yKnζOMj#'ꫪJ <S]OHR gs>WËzؗyUq'֋+H/F J=>XD;t|CDN 
]ăF[E^eϸWp]f{Mz2 y~o3;׫}9xk+z ?xfLa_H7A{XqNᚢ \ ˈ@@Gr0BBLR0N3=$KC&#$. cJ! LFϱC ]mXcs:O8Q{2U]y39jt@W >LScLhCzh TP'ѦrDQgr[h7 iP'Γd):]hˬ>AD!.q! Q0GPrijXJuP~Ioz|9q</?QQa#9GV%8d2[srNpn$RbF *#(Y)D<*b\pSs"885^koŸ.͓r$%;$)>_G8/)S9_AQ|8W(.DĘRih\2LjDq) NGeZs$UHC ^ [tJ`U Jn)`Җ8-c9RBS` j 7\7w|7\|[d=MN&b-vU4 Rx$EB<$4Io@(#c4:"(oy,U;<Cvl`;8fK%:3A%6`d>!^G9ll}fljVѲVjvo}5e$QRX8AƬ Z19IHeӂzȈ )yThkB$^ȴF֡xt=,&a{X;,lb͏S-ba~M5X" 1SEx]#"pDTkm$Gb^&${^`f1yBW[[*I]`J%)eJJefF2ɓ'Ȉ@vU hI pJ)!Rq@MBP3.8yQR9=iXE99Ryo#>jXi,9Ua\wx" ΂E|22N:N }"h(?1p/xXlv@ 9 a+:]1l+n~Uf}zdu 5"ȸɱy 6kYh# a\=qCaP~EӟPj:[oiW5bRQ{hgUx^\"Kt]/[{%8xjgԴYp9"{V6un]" 40_ =ͪ=||CcG /qPH E\rDŽg:<]=~7869:_n6hLV$x_q|QFU't pr6?&ize盬x>ZlcjXqN?؄s!L(t5{f́K=,) >ͳeLnbbhFMLeٽY0ݣt-+l }cN:r|O;[;N"[sjytrwd8_7 nMcf6$PE+vϴhdAgt~Agt~NWuG :^?dm\DJD F[MぶE2fx3MN^Jj6~z'`\Η HSLqxL@ &*!eF2?RDkm"pm'ŏC}ӤeK-ɫ{)q~"nnK1?}V\ZR0 BʡZb@)&ep[J㗑§sg}C}[ra\3Rٱs486Cb9$40 RAbPث@d OZINJJu&JRQs4JP_mu?+@bB?1u3+-T$%LhD[:Ub:ub:Yb77\G%, BX0.J9qJLj2BKhnxC*&hȲx\3C/Qٔ(:*x XZ_:9"zš-j]c\~Ґp~BdSor7q*V9᭲"MIP+RíSӄU(KYI<˝`btXEiZzWw3\<7TJ [,vy͙9\!zovtλ4~>.^θ-]>+at |BI 8Fp$C5rA#VQ)I小\F{;kJxY֗M*{jdmuxTLT/oO zx0 U// 8''QI .eFGI@Epdj>V.w.1ţ=ţK\s! 2B$$01ǔB SRb?AH+"t}#o 0Ru5N&'͐HW u,c6((Mku`8wי+ykuZ)SuV*կZ:PG Ȃ6,'7fAnye^IfYNwNrrhThߜdqBDjs$1DK]2$Q0GpM;ƉG(x5ǣ/93U4sv Hxehp!]蔊weK qg#ab3A%6qxK̖2g?b$l}Ab@f,j32`wӲ!8ΨqY-\L$*B @1yfLEcE0 wZP22CыbMK#b:6NuTbƩ_gae` ""ba";cVFDdT9#<Xp"AV催9$a+֍H75I* Aq2θQFI|cW`>|/eNTۈ!.Z},6KEU~ŝ3¥,`35L>L,iE *x \ !xh#@X q * m8=v}x Up*/Abx7!HA*}vB '7O8&5G@JR B:HsY5ZmBDrʢ^JٜWhi^J}J#[|8[V}]#ެbKGd>8@&qnJ"FPA/*}l)SHnԀA<=0`@F ב&%pHH$%O,JZ1Cc Di%e7_ Չ GphQ%(#Ku<ݵH_eYIXV FbqJTd# 8qݛib0, T)<6 zyoMI($ ,4!Qh6KH :HxAL׭yDr{Aq}fU q'eH*ނ LQLQÜϒrdXVz=d7uZٮqdhQ"E=o 6 #  .q7FzI-”"B3G2IP6(CJCA0z?$4sV&h p=]khOq^_/ˠ \ueN/961%_!5эDNxB pK4srOX˙M#u:jśy4pe^#__ư:-}K.@=>>v_*x5Dk(%Q5ZEYoKl-ݳ.ݿV`Cdu%ںh?Mmj\=J!.,F[x9<5bEQnc5M銑ju<{F7ϣk.4ퟧIG"Wǫ  d|q-v L&Ѝ{fzOmw 0Wh𲯽jxdV)`섖Z?C2-T3(HԒqBosRdYhBGf/)qRbOLN3oxRcRLdL^*?<}길%8TqD},/=V]pG)M?Oϓ%M\m͒^zzjYYѩ(T" I+ /gI/y"=!>оNQuQ(U]3s؜jɟ?íѳ)% e *9ν . 
;" <OQ#e k XmCeL%L EfDBiqbtL{@+p2i- _)M˻-k_"۔c~]vXzwySve`+k'$MA>"32C77C+eEfF2@7K7<Y00`00dTUWEU@[lvΧ %<ҷȜb^ ]>>_#W̶tFٮxb%l0"5D0 I5q+=C דǘ* O-wad}u.x(6dZHyb[D0''b9O<.n_i tf "%b/(kC2欩$Yy["!$^:nm0 g }&\9$BTU4&"gf|V"g(Mf6F?"F,׿" r\kGI铼u8Ge7%nyt8oAai =pZ7o~g{(-lKFn73ڻ:9]&l*ۼ>^n_ ݘݍ-Қ!ߞ.WbKb&o3YgLGej|Œ-rȫ}?9+o&rAmo1Z#"q[,9%;ݷ7FD- m/p0+Gt<[7Zڜj\ŝ-?^ ^07WYdIʞ쵠2ZNީ;DoyK>I+1r'{|;9&MY)Nmì0䂠<ȣvJog ^ *"Dscϧ=uc>īeڰw;{cxcqO5_U l¾G8Sc"Z|S S\`c:Vǹ@6A{2uMl>jwu"qLYytX7^YHΌXArH LÀ'Ou@aQ} cD[#O)XEzb 5r@P[ɔĺ&)l 1\j) (@zպZGPB.cI5d0r>W@q zؘR:C2[chНF_E +$,_BX8RSn͸c} X7lY M ѦZC3-q!Loךtw} h' c }%ḏ>C_{m_vm΅/9wbXEr^j)NPfxM*P}.܃l yzTubZ[R\m hJtb pk8(kId [g]o#SūPlɐ RP i`P39w>qE!DV| USŮn;>KcWj'uJaѯخ[ㄎ0|M :l?}IWż_|wx2<"`QA  U6)X'xBgI͒Y-#G¢r.TT%b,jaANRZĻdbTҰ݀IIc$6 DccoEf[zrs7hqxGm`q^>0Y;m4`2 5&]稵4 ]@&qj&T&hޒU͊ȹ]Yc{-zӺn߳/X{kS'ɳ|" zG,N(+6BF%tLU.䮊 ,Z?e>]>{"bmѠ(%2YcJEQ~e`\ ς891k2~ ?ש_]n]U m%P[zF; +Ut%UT>.Kĩ"A' 3suꞜ'r< _ cr9 1$@q]k% XU]v~׽0]R~<]eT#6AaWsaw,{I8X*PxtV5Z4( ˜ \9tǙTL 衣Cs;6y9 l&Z Ai"0}tQ頹i Xʭ8(Prij2`!Ș$E&^m4晴&"$*DFu|$Ҩ8 ԫ 9 )P)Xɤ` ] U6ERK?"nmH)y=&VtkJ0(HL+PՊ Q>hXE ] `.cgɷVN226a; 9wymbpSB no|1 x3{'Q:g~׈ݴ<٠}UW^;dh*5/>Y}xb8?,;Z(?\ |w֪/KU{+Rx%r\wBY,qYo/g-{u$fX kb?Fwn̻C=od a>ۯ=?ͷؾi[:?/@.0?nGyWh潲7n}k#lNlՋV6ç]yqz}3U1͹C?NX.w "EvZSYW,LH×#7r$cF/ҍ_E9lu2|-#M. 
ebub]uQ6c*gߢ<) +/G2pO>K7׌7uX])q`bx.xxI2M!'WdGV8s@5Hb+np@JtQ-d8$k##24&E J)x0rnڄƶN#|ϦM:.`J7ndݐ:׵]ḑm}us;-SiBmƇA?P[#7jjc5*Bm/0fQԕ<uuF-ڱFeKTWy^\ο]r6m:̾&{$CAfΔ;pr];%?ofg"OS3o:;f?Pq U<~e\ ;^KUMӲQRO?Kg_9?U_]ż͖ĹVwP%rVp]ړ3z}a \?o~?~ۃ+U>|%QDw(2VX7xW|G+=:B.)t6&1lmdcȅ!&*+҉+9hc"ӥ vlGXܙdBG`j\\"qj0XYmp<4Du*hJD* SNk1] N DV- D-y5rLRl5DhHɵn59}9B ٓr,9 [=5YiP8.::[]J/uś @SY͖b-O;nU$+δ~T9˩-檔XJb,kJ^rb?>XyE`R,+Ȭ`2&0bTl B OlE; F嬗_1ן`T= !EX LʠrNŢ%mv~]K9 ">3msA(,IŠb%)-Jl--4d1*h@M1xX"q},%AϗͰf:OGO$t!搯l$Z6!ag|vQ>g *g E\Dэ=AT N /1A4x՗i|MG-}tz@rutt\u/ D^|9T9TB-J&K**1|Ev9d"익SZMt 6^˥3ҹϊY1G3G(DU9k|JPbc1: Fzپ3VK6 :[3`0+ 5D ZH{ ۛ2ϕu3>9QHۑS ]?F_˶ ֩7RQ *]1Oktm_xZ)/U(Tdm_)x9 %N+ +HY0s"˟jM-?{Fr%=6e\n\G3ErIZUDsDjIck XfOMOUaYD _INQih D>`R~Hq ixp9Aoiuh*SWm.GE.yU z1dg/ofL~؞V0sgl9]yO_@c 8PW !h]&Cfy5;m_W*My{5I_}x®Cvg_1ٴq[ҞWw `;Zkf_qaWד\U(^.`YMo$W=.|̃Y\_2/m<͑J2:SBSOt󭻙[ޡ&KgUq gg,;&ٝ9w//Z N6cEQP^># ┄ɅVSNbW~|{X&wuc[E )XO] b:m`7QΆx 9 ZVL_Jaq" k?ik6fD߯˱gg1rȜPy~w &V Sϔ/}gwgQgww7U YYk%BO˯ .nq۸:\oq ?rSDV:m^G (Aw_س⭷J,IGRZ8M84mtR^9c,d*( ΰB{.$&0g-L}%Kur q gc#C6Q`q ŧ(4_ĥx%>m4|ӛ~:-!ubE<\s~A0 jj\Q:%Z 0{ԫa(M[Hg- 6ĩu*KOTY8gg&k%5$E `\FaDkO$0|1C@'p9 :rJLFlTTB٪3  ʣ"1sōuBDiҙM4&ȠOFb2\5aşJ$v(ay^1}5__ EN!?:A h6, }J&"ȝ3"ƍ5udY+ }s}#6C~{ǜ\Y?뇲>y;~7n6R^,SN"xIuԈbd+!(@"wJHBP癣A~ŔQenGO>#?CA 6*b"AD0_#g?VH=jHp`pӻb[weM3U*֘}vtJʅK6AtRtqU9j"j|RJ6:I|H6 9h}>EHhZg:=H@T%h"&F T.@uV.+ֽ r,F΁wͧ?8'}:>D|yGfkkc՘ZzG},oJ'PP+J%CCalP@%7Q KeBYRep 8<df85c2(PR k#gY}#?2u+m>jMoZaaS4v`Ye) Qb&3a1nH"na2>jS ,9^Kûv#yHҝjf'JY|M9>]ϵصM×_5(o7{ GC9T4\*@<mFjK>qФNfnn6>ny^xwcl=8j,HI}З9ypÕby*@$WhqEDmҚǸTq^#уFxyo=DQ1A1ŀ"rDžD.@!$ǝZ9Rh\pDU TQD!Vnb3%CT{*DD3 +s@pGiNGO֑"pyXRnlO"n/MM>uSSo\0Ax$()l|Hј#3Y>Bzu^AQ<2S [^fqw: R9(-8M.,ۦFsn| 1"Ǵ(4!K7U0 L􉞬Mǣpv9lWnw\Gh:_cn%a.5/=~!5FǼhF{yJ}FcgL{^5*z;VcJz"gǭiуD״RnrK.5(T%Un0m*$E޿a||q@e΢|=c{}%iJ#R֢ ?]$2CWD ^ug|-.8 )ϔzA߶~ w{gSf!wԎ'iʷ,]>3 y7<Z38l31wkx0(|fdTd]cQ{Ӕyd8&&.Vlb٬.6 6Sf3b0)7! 
$L 8BdF&0*$PKR XD{e ㉯S+Pu.=]dtm>ѵQv_ %ABDY`|s!AGFBBJvuRND$QM0Ƀ?A҆D\*H1w Dsͼ =J[/, !\NOPE#Zi)rþ0^PϚ6{[9P(0hhɅUVZB veq7M)2Er"Bk-|zoOB߸М,/9 ),1QZh>T4:G5+RHNlp(QUhB][1k5 t Nj5J5x $|x0E!Yvʽ/*G~뻑k($qT /IYP 1!dPA%Gd%emV۬T`ghk ܁hl܁cC^+C j&ΫwF3oRVpmW ߭|{yw9{nK1·ۧN🦿->4r|!b3UH9"D"^ ml+Fƒ6W8D|MLUY}eCW/|F ^F#WLFij%4JFu%+G@✍\k-R9O%PW9 RRvI}-!;:aepcxquP]Lpq_Ѥ k X?GJqԜs_ gUm]Lu\ uzpY@gCVw2[Ǵ kJ=nX[qc( Dh,@{!0 Z4WM1VwbS_]K\5_]c=Tibus jy;B&тͿѯ_''ӛIG6A}W]q5juz1Ydi<e@ pvl l:HKQN!aP62sآvc\4Yx"/K4҉!bDE|g5Iӫ@z 241zwVm"!L *yA@t,4ҡҏ *g͸q.]A$IJ6 HhHf;,$L+Mx*7+F~ٶÌ߬tZkŖJK*Wg/{+Ak\կ +A8ZDIkY:|"0y(wC[٫7{-Z#@jEZ1Q PlT*)@XPG32jdrٻ8r$W}NAF ]lO? eyFcSVmJJu(҆RUVG21C{.%Gs 12 U̮BMk]Զg 78=(!ȩd%fFQFd|-~U*<@[WH=(A_$N.շNLթ2[dyȢ3\?r4dofW!ALCrJ@tuX2I!t^PP<ގ5?O~e&^No'n>87~t*\n3'fcw ɯ37Q(t~}>{h.5[7toT墦db?є酆r vN4:K4g4Q>{|x[GWɔ4nL7߮.'LEU0ex_.9x#ŽϷᆚ:" C֊㷾-o2v:@> m^kB3k6ge]L_VR=^dY ]ͯv9wU86m|wqK6<\_^>q姛k,~ B.d_pBFVZDh.tpŧnS '}zz5:|B.;=}q@;m/Ӷt:%a 4b;[DBWKǸq#R]/PG_SA|Tpp/,k_5JLtpgN?ԇg1\-(&fA b SN 5Lh͑S9~I.G.A/o*mb6YRhG@& ?@D"[E 뫳G%R]EwhTv{ #8e.K"ώÍF!c~1䁜&_of)UTւ~ {w?=J*vpv`ht,(Uaʣ*UʽV{ jA(Ig> H~UZrL b!8rQ+doM"xݒS _beF[r~h_{s?IyEJe)Jŧ!:Ss e}%dZ 7\rq2y1]?mڢ5c5yV/y,Ko 6GbY:跣}s yb̈YYZ62z˷ySD ɞ`P-)@WkIq)rW5ZGCJh` >Q?M>r(/Pʁ^l2b`(&&ZE'&{*d}S*}~}/q?$פWH yX50ݽϏe'{m+JceԀJ4^4L804LCX ^ zLlh5΄ӻ oz0;kRgIƬ PPY/c0^* wWP^X?z,&l)*rr&J(ԚDomo^.^w=>sۘ=1Ӯ-}&`/>~1ܷyP^-XN!izz!ϵp6 1We9z݃֩RRId61sh-Kc2CGTnh^Zэ6uɊAiήm.mK-̌Mآ.:Z US}._͹x >BEw8f H*oҮZlM[ 5|L5{Iޜ_-uWW?~:)Cn1i#V%l:kkcNګ~Z{,Q_=W8ʘhp %ECkW4o0ѕ CW .P誡GOW ]aXYO?Lڸ5iœ1N:jbpU y] ȣ՞`0 )鷋\J~[lN.;w.DUwVE"d,h`7FP"~Clj7LWetZ͇P]t#]; 5ܕ#ӕ@ Gztf=ZP誡{tPj ҕa `;jp ] Z骡D;+Yf;Ъ;!}G(w3'?xͼ-LBxs_9,y&s(weMC!/lfXZ;c7Ytfd7Vi& ]5BW cHWb=$~/P誡5GOW 6HWor<QhW  CCMϧҪ ]1(Cr7kjγ誡GwPw&0:xeBӋ$E2%}Y{VcV$!NJ.Uu]=^{oXQ eB$w(ݲf!'5=/&caIP:DEAiOPHG\JQ21' ZU05C1 LѺZҵ޴V re]~-~Zf 2 5-iI1 V٘dIqRZlЉ~"!0 `bsىYe\ V^{֤d<삇$}ZUerjeNd`1Ί)ZJT^{T-KQbW.9Bui㸒eBuC =>aZBhK7 ibMNI 0z/ IeBф i9ёO\,ޢ ԡti~Ʒfh;oylX "(e*9<Ԫ7"Hbqˋu&ǖQژ[r,&Qpk #1-giF꾃m EAPTY4uk֣68g9ԠMEX mFn7((ʡ7\AyΦs` Ú`g'FWg׬jPX;M*Ĭ6l=?ՅtZ, |"Z}W:(M*ӫl#^b=ѠeTA!_bUoFBM 
^R328nDi0eP0F%\) GK֬kdž@A5DC7HX] @K,brjFF̽x;hu 9x|# g`EV 2ߪvC6e |nl5S1Zo[@y`g=@ׇ ˷q?sgg?Y0 3kF֞5`C˿ ೳšbqծjx5Ǥ\3yhYsh-ÛAiی;x~4\}UeFIMG^ÁwäDϐֵJC7v#a?T&:U'%\aۊNv0 )aTХHπOt4<{Hoݮf1  34uSj&n ?k;:UycaU‘r cUxJ'f$Up^7WXڨbj#cR=iom-+5ށtXv=ԵjsQ{&=Lb2b+(Z{'е-9w[~nĺkPqn?j7לAPK}n]ϡ@N;|i ֚&6X b0zlx_HO 4%YIzUipl'aPJ #Itk xzp4T:6_fs˥ZpU\CC-Z*cQ4k& s?GR!6.X,vSk5c4&Tyg'];j0BI>z& OG wQٰ>Cߦ':Q~*hV-_`N߷Ȧ6'фi&^~%Q9/p;G{+);54vͷ|߽p%]{ 1ݴ~x`@yׇx'믵 W 57hxpqo_NsׇJק:劉pݵ޿{|AcH%vi=Lp h|p (SK]@t "n-/c}]2|Y tCJJh+ CW (tjt(#]R'OYxxsnq5P]zk8}CGOonQ_[f .>L-&SwW}|婀U9]ǛWՓ~wsJ?^>ew/| JJVe"iqBTp,H}r4ty px2sJ j韟:Zytŀ8tp0t9&NWR;ҕQ1y?] õib{ct(!]YIjbɎBWVk6oBW;+Ł MuzboH2Fjte;'-9y< oaAkg=??^?цyy7i=xpZdv:X5~vaT%57҉ #M{SM~ZQ!]cm ơ+QW6?K ]푮q(ơhì1Z[+F酮HW$Ef `0tp@f%b~t(`wvx ^L%N̥.U|h_\_w]hyIfa >iw!j֖rSmNynUl8Yp]6 ݷ3EWJ.M߷/*-sL\k)lȪ6h1z6hӉ͑A~ |e7w7~L] m"M-+ԡ7JU{? ]1K?hdNW@ivIWV7#+Zteєib: eBW;+RAm`jਆ+(ts/ BW{+;j65ѵ3W󁿏nE?M4 fZiQ/4Cڸ4 `1 /U^6٭P]G+kYcn&Qz1{8th:]1vHW v~f s͛AF H-ͼO>2tr{'> &r?w\eh/urJ7v-N6ؙq^zubZNW2]푮5F CW.>Eh<]ePVjte& DWL* CW CWRZ2FyX\ޢL4*ym#`NjUI6)mFiՖN@yꄄCӠ=h 0]1\mG+FkQZ!]y"@tŀ8[ 7QѦ͛`~0G +;Κ( BW{Hh$b2i3RXh>]"]ػƑ,+Bcަhrc30T0BWKV)K.Qv.A-MYd(X$C'νuHUx8p}Uu:yQk۾[~nq5+Y_<GP.x<,:/FRx4V_ꀖP{4ߍG#zyE7B_*X_/|BqhŅ0ǡ-[A#+թUOQ)!B(X:CW.]UF 7HWLhI K;CWB[?t(u]qI+ @:CWW0BB Qn+힮^-[G ܀vP~ u@e`^l3\L.U4Qi!G+!P.hE*T[2o!~\ɺBW^NWHWJ!B3#՝QWVRv(% ҕVRuj֤;C . ]e\2~DMҕsپZ䁋NWNx@iyV3Q-mȌӾnOu`ի+qKq-ď.<8BtuJE[6T+թUODu2`E:CWWBg*UtQ+6]RW@w*+t*vbz7[V WUF[OWR+`&<)nܰlhߺrK4N.}^DycJNծ̽|4-t2`%;CW6]+D˨l;]eHW0`]RWXPpyg1Ȟ ]i"%5 XvG]eq"tQ7=]2$og!ʀMw$q3K-/9 =]EjMx<:pIUY<ׯ4i5GigN;b5t@#mi3^ 2w Dms Z *S}y<Ҭ_TV3|?fm0TP9ݿ~ȟr7T5w(6^IyߪTY$TV\?. 
>3+|}s1Asl"ٓ fN= }1j)|:Z{/J{DW ~RnlN/%ֲ׹ZQ-K9p:MF!C{㸚Z GHO ~8JLQpR9pi (?'S L2uyZi퇏HrAPR;>BzUoq-{m5jϞi8BD S[(xR 6s/>Wm3FSO{dSd\G*sA?kn;љݟ|Vl:dEuX ^?$MBDeT1 Ӧ@xG4f(Qb,"FlQ1l/nSIAZC qJ*Ns2pQ2~2!D 5q!ܛ!*{멐 )A_Wa-ڟ)aE0pKϡ$#bXBy*ѝPsom)+j@,$W;" G:,| rmxd^>wi簟!={!PBMdڶxk$qtsɞz1y_oz52< WL޿Z5Q$I5MB"2$!(aPiiE)VJj+M Fb62N&;LfꮗER|.ڳa>s(Ɋ.A:Di[Кw˚g )]#p,0ލx!.BA9Xq*iQ?v-D:5&5KUGKL@ 7#ݽs#DP@ 3˞R/H^'r^\1ݑc<|HVޮ2w|K jB(p1 hrlo2ֿdzҖD~T!Yh/VstMWv9 p5K٥)$?EX쩥yWwL$_Y$`rFqkȪφz.=(m*r_7򍋨L{9ll٢kc](S%ʩA)BxK UG]i`O]d?H-ql? g;~Ec16Xubueϱp`R^Lrm8ėB|9&<؂->)Ǝ§ar}8wSXRNE!{EE*ܳ5#"~rýק;tH%}޽cArxjrv1Fݡ/ fMNûDoX1]g"9y&Zb) M4B $g̑l!tF7\&)"(%ެ&*}u.ЧHǷYԑD5Y-D;h3%x+ғ 怊CqqB+O-]X!yH4&i4 R0CT%ZUo$ 6ʊaۧ 0*@<(J!@k׷X^ڼݘz\OLwj2ObZ5&^E?#}Kl+F+ZƯWrzmF6Ozg}CKBN{V)ڕaEcނb];N ~Aj9F'\٦. kg|伿IXy,mPr.(*u{~8* 艰X(3Oy |bۋJGfϋނayb؋uE՞/w_<,O{'k`dYm#goջyuՏƖ[0HaR7z;_Mwͤ_â__!XV C^L.8c܋sd+) Ski!myIo!L=$(^fzf^W3n R?e-j׀"S.l^C %|Y*ЙNiQyNfEZŒݕQ(T;6-撡P(8wN8/̓!{##z-Pi{4M &=~dyizFb~lL5jܿr.|<ׯ>7ĪN ;Fޮ%s<9/??,jf %;BTf2K {5Z=PR<,rn12 Mq{ 6ġmj?`3!r\_Iep:z";b46,hփTLfӂ/*耐\>=Цu'Hؒ [{.ڇՕw*_,̈́E6NxxeqQQI!QPySCm0jcZE0;"ob i%])A%j.Fɫ r&fy]=aSuIIJ;v)&cCc35e 1!4"gaJd_$4H[)DIbh }fo@zʣP$kPKV )4V@T+{%# $bSQK"=;si_WW1=HuccuM7EhlikcC^1hͳ/} jp2{kMfo#}~__@pzQ~9ԟ ;"[8\=wߙ9#`R'!;\ C|D@JsW'LĨ8s)ϩ6s"8!+|JDJа] NM۰*Rۥ7^ [2? ACB~6|斠9=z{,;##dli졗!OmVr13`fZ;J '᧰sKA*VE~+AvyJ&A1JNgA~G>r+й},Ų7bZd_UGv=%ɮ\ڟ7s'qf[=(Ip:>-?IyPm@߂hh|S9D%3.r/}}>G^,+eh;,<|;΅^U +dy{mq]8oPŕOl5ZM-S3ޟ;XCTNBAJ-@VzjnC֞;CGRSjw2{ڪ:5JOկT] IoW.Uߚz@CgSiѹG9wh-qA b7rʂR) IZHMxko9k4(@"ڻlhrS>F6pIJOn[Mni'ZbB'{T:;8B>U(+:,OgE 9GZ_V9y+ç6ŗ8&jU 98@E=Hhwm=(;G-;vK ;!-FɂsR;iEaiξiR޺s$eerߪDceTX JN:SGE,׌gK:@+/娽uu,| n[mn"&cak^_߈%E jhfmmr^%V#%ΗfSpkRbT ‹ cET!Te=QcP8PN7T @wu B^Df_[<2^G-`ʺ>)&f7>ES Kel63si?{28mdp%;`m,+7[3[x"a,]$^zefAE [`;{)i  Od)6RL矋8aJഁ)_kn{ X< kFӳ'pE+5Nj0]T|du! 
SS}Ě`@QCI8X4DeTQrt_ߜ2ⴻ!Wq9et@:2FV]DZH#R4/U|VYoȗ\d ]m3;XMqv8oixf n7ݎq4޳f(qDezw}CS/ *t@-}˜𗣪+τM]GW̯7̀(!DbJ)AHR,⼄a":2q M&:N-\Ī1($a ,9W!X"S%$ڇ*^ܚiaHyM)wZr;D/vQqwX姏9/NkopR;OsZD"/wmRkG6;->`' o_5ȃ\,iI/ܜ"h傄b/4Ω}Amsr8HVI p('UWɎٙ9P[ x0<5 (akR ~ƾ8DҦ!::oS'x5)iL\5PĝNs: j$ x NڱpO^)\} Aˆ9 T{C D}N!P|kIsXk\?BQT?)f]|݊fW98Z(97@c]Ah~|7X{{c5|S$4ZXq)GSVj01 &(E F8J?4o3L?epYX ߎ΃16Y]wM?dz7;{G[ٱe\@ajr XX'D4N(ёT CJj^/TS%To-sA% 6,+'3TxfMM6@e4Y^d$Of.ؗ=1⋙g}yWGY;{X^ ~K-C <;w_,{E`̂Fx֦D>,,=\4󅇮iaR:.ԮͼihM_\M>7z0^ThAwTle`?nl$|G6C>뼎mnvVNZ!(zo(/TWAA>Y^c?md26p~{50.qVa<6b`>a'*2њR{pO*g7Kwj틧=}Kt4~oڐ&^=hvyYGG*׫[ϓnŮv4_" O#{4~0-ТwhPwS@T!ҡJ)N[:Gk0n0V$o &RxεVuOHAq/P:g/6'tz1u3zr:Te[ ]mI? [,n>^؍ґV08yr.\gυdXVS{O,1 %HP4Eӣ_};Z,石^z⋔DP%w"j (@Tj*+e@8ґH#nny駝M_`\-Ukԅ>_m 1)f}|VGʲV 7WH1lڬ[ܵY'\%<ua2Z~.l{wO^`_vuʰ8JbJxf떀@)!nZ$fX2XG0I4!`(R4*tS[~Al9fgkV+_O.qg 8!d7Q|Ώ0}y#uF ўT.+zwa `p՟Œjpkb0'uNDTDj6W&8mh@\*n2(> S| 0e"6];X PyS2Oo34Y5l9T˕'˷ sЭZjƼ$V;GQu1 .<Ϫ%d/_n;w>rKv6̡s4fbv/yԸ,es.k2]ꠅ.>s֝M.@! xf\Ӯswp)*RXri+/[*Twj $T6uLv҉?ڪ i<5Sϱ=t8}[zU`I.WC_oC F31>zuNjsCڢCݹ<>4ctHq5q}֝݇ALP֟g}9}Nw|kB1sJx1w{ .^8vǷ7}|c>> BOp s B"4BTc#0(&5L)()sZߎ v~TPͮ};!6 8kH1+EwU\1GC1E&¬Fu0Ff5Ɗ N.Ki`:/JRk1yOG #G-dߞ.\n6>sQq#MM3&D9-rf䤻*SMiě\GaUd/&TJQ--FY$V%FJ@#S{^7*@HTY,A &7T3 PDHRm`(b<!#QKt$l0fE:fDc 3ɵq]TPd )6A`7VV#9UR<6Wr hsrxϹ,\xOo-ݖ,-!6:.8ւUwfg-_ $tp:]h-N"1?'Ӈ &UHq{|7<27p{Fnl (7í?2 }mt{[,9I"~K\$%JJێ;ȳ,Y?αXymz Oh\c'Kn˕}i-N;}(U^'veP.bu=qy){\ŭ!/EKxb^nw@GV)ctۣ4한[+jp g2-5fR Jq:s nҐdҭ\VC^8T! 
V|T%e컗Ȑ;˪{^1zT v KmR8Ku~m4ZktKR[$ň1 ЉV+0`خ-LLSg% >JZ[̴mu`)\8ܥhxjjuB(^ށ*ӾHfDC/Jq#Kb_LH5Ndȋ7ME y,ZS&$axO8a9H iΉ f[+jp g";Ts]QQ uJ"E0Ŧ;*4W΢;TQO,-2\4OmxD ĢOZhriGRd(u-JصH8JU{돉.ݗ.,@mGEKĻGvz8 ~<[;@-gv0nֽM>!.udChqMz/?;Fir3ۮ1~Jws;.,6;x{y7- R?<}tq paao6wu{W>kt&PK0&l64n6=97/VUJsq`tTD>РU䠺.\nRM9TSՔPkt֯мeCFaBd!{ qh6J6an92 5a-!XZ}&u DNE;(k!)TP'2SѓSq2S Z}hB`/GI \sD42 ZRpF}zgՀ,oQt;I'|̂߬[|>uλsaDvpDtZT˩A-?&uw}=97/M0*Ʀ變{+WMNٟCgW4Պmh B+CV#AoP a%ַjN ah3Nki-!N8\/ VmQӃ&~!K/qB9s I/FS;z;\d΢<g0 ]f ␷TE Ãg|Qq0dS@ ׂҧ *D> Ϲ{J}EEhnpY}꩙'NOyZ)P3EwI\}]^o]ݺgP§^uwwsuOMAb@7 ,!J?¯ĪWb NIU l/ 3%2[ujv褒G0VcF*nps["r_pTVQhMC LF1/}CRNGh9%Q?OTWxE-s@ "لbv`L]}TyP Isy1P#UF3-G٫sUU%˹/כ_HgU"^I"Ii6^mqhaޫ1!3دK2F2TTwTݜ7l;x)g5J* 0j8av)O[Zu:z/2ԣp%B9HρXVDfX$/!5sAJsy8H 152Pj^c6' `+8Fֆ V`=B3ߕW8=@L1d]"JN^L׮E %)Cۅ,#F.ʎ<[`?? uxO|2{n/Oa!/?;>g7>+}zҦ>"mTFi֖%飋o'2ImV~sC;Ѹᝀ3WL#0Xw;w\}vHQ\9HB23n;o6\\]6Q}x< u2ۣP ]]ބ#d-7o(S'o2`S5fϏhuDWf/Ȁ)vuM{応 SGExO29=Gf>Qi/ QE dik]4q-mcݞuy2SfLт3N3䑟ų{-J~cj]!0n<zֻmZGxD 3rzVNF&F9zX*z`с%qhIP2 Z׶ HeА&n8#K%lbMRH DК4F215ڑMTJk6)pD(t䲍_|DF} #s>[H#Q5F #fp .](:d X߲iFdCrRi+AIO[ uR{ϘpL(V%!2## 5QJ/O6Ud>E,ʙ !I[ɜFsݑXHVswܧNoQ{zJzjQe(r)nU$ 4YOҤ['G <x^r c!=\$NM?WG _[/(LJC'Ԑ'̠#o~z.A6tp mi5&j Q ͶI/sI9g΍we$#7r9kY~@UQ*"o,3\gv\'H%0k( f1QԬC1UDB<ի_'U4C+e(a#u@4a# r _IB2Y`hEs>YAemvOi͛S\d =z(j Zg}^{ n92CH-w [[>wau7k~S?w s~RKZn-'eTi=F$yHHx_񐣍ݗ!\S.̡ER7*~(!tyہ{1!0!1wu^`+(y)| QXd{: 'eYqR6RשFOGʓNbc(y+8%U/C2seF(ӻTO'#&eGBiQ ܅6VI*֐PjL00(=R!TcG#ʑv]vq12 v1+`ZE\#+U4'rZnЪxEB >ȝ*D< Z iV1 ‚ Hh5#q !0yA az (`uq4Xxq%F#!?{Wȑ ŬAuއ0n/Lnņ$[ynodJ")e1Xz<}ʌ/##"##VJ {m-崒Z[4ȂY -( QH$^HNxi{Hηk y.4b3hl441[4˟*,6oH1kefO; Zf=D|iQIiSmhD{iDJ\Dbkz~m֚m`eT>{*Y?Ƙ/fˎ Ugۛa:=O]K*=Y_Wo8ajqGWZbo垖-T?(c9Daz Gϳ|0gI5 <ٱOKh +zV`?T{}sVuC孼%zPJIXh !/¨փc5톉v+AtJh;H`1ٛv+&4W"Lծ58dZڭ9)v/dZKZ@ hڭ8pY,+~w?\E oU/ ʤ;qTS!ydZ C܂B+)i /ȏas #TS;%qXGa!hc7Jy f ˀ!X$l:^" `-Crx9YO9s]cslr2B,@/e"{a}{%!r׎B>ؿ2`?[4;''LgyJ2Ɛ{=ݩk_gye!,!YY` JVkp.oBٍ|`x(|!{M9XQ\yZ^L L˚9uyV!XÝi*-wu%Ǭq<-cZ*ǵ#ώږg%׵/*ݵ<&%0ݦ\63ul^M;|zR<HܛDwdKtmA>ޟDwjfKtؿD{#,~c{dly6kU+g^! 
h[RiuV[}2]F,}پRK9{/[Hh}rp` K-}/,/;Ȃ:~bϤ.Di=f鱝2bh2G֙הNHIT͓C;e%C(R}N@zU‰JuSDl{ݯs5{b2e{5)`Uқǜ&e󹻸(@>^.Yi=&9{V~)x5hw3}s~f߬~x`jq(2y7#> ۈWīB1GYnD|_mIsbPY-1uH g,LIScT1j|̐Z-fT;̚ [!=pGoF*4:iY 0v6H@ *%YZ!ǂ3H$`pb JVQ}tH# , (a"$yɬ~.K񹙇Kx>5PaW}~H^Psp#R͔2w7k̪Wu> g'N_!5(Mgh IwlͪgWwV~qy ϧ n[Z'ۣfJqxq~C)')1YH*'g2 %!&"\,AyGo13\^TU=A$̊DҰg[%BǮP({'b],ZRP_R6+LI2}бA8UufwI2{':v}&#H()$l ҊHsoiE$U啗ݝ &P̀M${&aHj,}ŭ0[]%*}xX'-oJ4c?iUy(vV FvV n.Zm)>W{ &wM)?iRˠ_M?ץB9 ;TD')iFR7]L `|b_`+1*YUʥ?JKmQUIAS )؊2bf" Vdm]$^rhIGf;Ӡa_/-R?1QW&-YR>ҧѨw8o9)ƞL79N9cV/pӳBRʐ+:ŢCGb :4Hg(v"1DDJGӫ*߫w<*p@=|1M  _j4l~::Ńf lǏ?>78)h=+:ŌPM yaH;^cuPH:$Bd3"{\:PKf{W~FjN9'=T%%6a`J>sL6bq<kG^%YzVkd%X`bc*]LiEȆ@Tb.QYo?ܼHPVn?O޿|i?jނC> :ј&`K}oǰjnq~NކTErNn+FXlj=:u2,_G > L_\^«Y^ARĐd&p?dr %kS.1Tbޚ v 8hsN>0H }6a9} \QD*K"JRň̈́Ơ[v+ZӇPqH_6uCaٚhbi; B-61f<@lA4W@eeefee=6 jװQ,"c1 KR+b(rONː9$Ik1s\5VDFv^ k-iHBc"v` Qj-  A6I μXO}G!u/D? c`#t3j곃6mb9X5oz)^>xOpr9~;4ll5 (9AI&ݡ0t(&P}l\G#KU1׹]>M_.%w\~]K`'Kyҧ^wMPLb3V0hB.Ql |bδeJL$*txIyNc@y+$f*y²=3I =YFI_FlIBv6MW@4'DB?GI.ur 4znz25nu3^-wXi4q[9]vd1[@>{hz8r诃~rfni~̚@<>?ΝM{06-LZoF㾳]&kx mg3YÓV G;/yҼC$7P[A/2a2:DX+:D,aA#i:Eq+@" nZ\i`/kqw{G!s> '~c֎X-(VbW|(|;`V_*n !ZXܚj."vQrJgckR]ѴH`B(20)FZq7&U9uJ7a[Ң¨5ډ^%.X1#k}8Otڦ9liCNg!K%W@^`O6W뼧jkU:r^g-jiPv2O?5V5Xp>ujċ`HzW8z[J-W"7xm7t[Z֓h0el_*?aݻԮicOo̪fԄ ֟P7Gͤ+nGqk?nO&ZX izJ7ZQ?nJZ )*3i(JӘ`.ֻ߿p=dǿ_t  3ف?ܝk&fJ2Ɂ!ۉ3>z>-染{<8Ƌd?Ç/f0<- =lݴZ|Ԉʢû?Vfur (8r1SNw2tEdNzi\>C(ˤC8"Lf"6,.qWF ⌌BSV$C]bH.A9#(t ߏC)f\7.u rȸ/b E.Tڐ8$PLM(X1'i.fJbHIcJ|!Q/7dP-7Bjao.Q0`|#7Ou Vgx?A?l}$:gޓ O,Fiw?aCX9<v r >v8Njji֓x8oq &}gb?m}g![ cowo^Q.򉁆C6tEbgX[k*2e6 !8|"4d``#Ft8g DƜ5=y1 dzL^ɷ^>8YgH4͎9%biƁ,d&v$3y98my3r=Y aTB6^Z @DXO4MӋ-ʇG:6Sա03d`T0J!ퟘHa``gUX Făwq &@};x2񦷗zS 3s1 7Lm8y<^8G댔8˗pȃ̑NC2G Bj2R&rTTC~&$>*;L^ҥW@U(Y*)e.VXeĘps"IHl M̽da̅ {?}I8 -yKDh`ϡ0b;,sc+RM@ 2QF^w.$ &ٯv&`}9U\c4$(x3=fVab RAIOE[ luz_cr&坅h#(4d66~~ߘ*LGrم6Y5zfAXa".ϫ{UC)H=߾? . 
(@( cL[c{.wc"BLRLO;jaSE6L1酐;jΑsV^N Mebm]:q-F&dݗ{{xG1Z^Ճ fpAP\JL8ЈN:1pI9PYG$Exc8 &Ƒd)h$JEJsƅ #Nqq)c8cm=64R"( QHb66Fuc@V&$ȑhT JUv^T:f5!S FLHELmabB; nH&᱘ ĊLO&Uq:)W֬rX\h*K@"}?b^y-HX:k[g_>Tbzqz!#:pPnbzwy5/s@ .W`}o 1os-|Ԧ@s]S 4vZ9/Td.Yj+W.fiu—t058wTQ&Fճ՗ȗZzBV7F!Oު]8u[OFrT/3XŽqW6r3}%zz)&˚A;lN-O$i('qg:P{czcL)r;Eolw K@[I$i*BTFꦁ鷪M^1l/ECxz4mw ;+n+V ^)gO/jH^w^- ڛ?Bn up n\o֏LkZn:@Qk$tOK)L)8p2R Qcr?Vg  r$QМeHaH[L)a #PJc*JZ͂; "" 3&]Had%0y 5!X-p%Ua}W EEMf+Kt~H Pgd=P4#]"E;n FT#U DJ)sT3do$Wѐ?TȘf/UןjT dkxAugi1D',ٻ&m%WT~ͩ/n⳵Uɞ%)mciVNNoCҌ)@ 9|h4yg<iYBPERjxv[-0U/9wEx AT$FB#h) ͔u47չ<JV95:ПüsJk::Ưt{o3)hT띺!OSS8{ZjǫBDmZ0);UQvTӥzCg5x G͠[,&p6$y)8+Q *I1S̿Hˀ=gsj @9\S 鵒gF:6oPjmoaC:L(YrYHgDQ#Y6i^"KYw3t-)NS,é_[9#)5'sAuיfF jK4Yud~jY+W'{%w^U%3)X[ ^R2P8IrPhcYh(@`f_N~a/':CT+/'X‡uuo $msUlZDw~RCv3?d&+i_OvޝwW]ҖX\h%W4vZra(,NI˂9rX'K:%¢Q kh9ֶaexdrD7 b)[y^ klڄ CXpKWqB`\!N$mZt~\Q/:% @ᢣ2$n6l5= MDpuEm]m#!/âk(8͋J/$0x2fu0)jXz V'j 4%%,Kol] dl:.& !Z?̲j̿>+6c(?)P5eReLY) RӒ 贆L\pKKUJx<cRDb 48[J\\s!p]6⳼p$åZ!*KӖƥ0`ƹk Rg!/sXo޻bj AlenA%Hp@IAJ ca _<0!t|X-& ̍'+ 1ЪQ`=ʲ,i'QW*oʂa!\)Ғ^6fR\9%:"`x98YKg6d%ʑRZnW4{Y7܀{0F r =!1`(crťF~XSIB|~ !#ZDJ cb- g9OsD@P6U9Ĝr]|IpUZ D(Yv AR!+8qk$rnsV)rw:CR ?'fX憘䄨T|L3;1# @c)Xqf gDrcb۷Z~1/E*y3/=/ZI[S,d{W tT5 RY0C8  vԔV ʑ %V1BT0#10:,f!JޕBR(EcGb V @z-Se)rА%(vXpk^U%n9j=o`u/޹&[Em&f t[ZMbS|rKy?Zmpf¿ |`xe9!hT,*Ȏ.'Un{RA8: ݡEqbH4RQD%d-)fZ4~$[ZWMK[BP;:@ѡ ?wA@ğ!Ɓ!j+ݝ;' N %Q1tE8pez~R]8~ !?%.h]r>~\ LH(@T/H3 u/IOgL6Jo`"]]3`+LϢ'XLCZ P79J`P.!A錔J~I錐JԡFE!k Lg@ur5(z I5LnʼnTXy3C:$Ph(B`z)PBQ<<-]~E fUE_)rۧM /Mi+tyJpr]4kT] #ff]SUB&PgacLWƖpxkM6!F&=ZGH{qqX溶?gOտX̑SΗb܆0}n)_ȯU}$ XyLk -;wQLI2=ߖ+᝖G /iװM|:K(Dy,WPF=bny[\հlupst_ui࿦ŏJʴ{Yd|pa:rO'AݎyJqP\Xp GGlZ?? yYM!B ZHVIG3F}0ASzO%-}=U{fd{b-M'M솄*7&[,i;ۮ 7a==giҸ'*'*}'[nк^FO[̻?,?NWkQ<:q԰ZOFqr R t~A1o%LȶҞ$jBu0̖C/֩aEv{ cQUhHB,^:p0:3ބp `.s7mb:ۨf] buKX J֭zR?E75*۸q6]u k{JTxiӇrQM{b<ם?tL>Ͼk-J)KEXZᣌ{ٞCF>! 
6&zdhFvꝸaWAǡvF>띻QJpdPq:jpPv=t7r0hf }>L9"E4 ݖ# {ͤPzT E_7bI``$H =L$CBJ0Bl>ݘI5"-_~pݝ2M* ) uŸ_'PuaIaJ)a^}%P~đN!=ԪzV06t☢\v"{~|FcFU!l'; jr^$+'}ir)dS=%ɄT>N \XHNtuB3/FD<{~}K݇'r b戋W˴pᵗX"ِ_պdA:(Yp|p$)Q=3:D5;Gӻ%H$ H~4ԒȋoqV4 rvPfeHT|(YE8'=(maRd0M)Bh'oGz)Q 4?UX`U+`'幰_4`^`4.D"ЯpJ1t8 (zp~T\oNg넳H=rxǝA$hvQh~[Al:]%W4'ꈨ[ˎz;Rn :yymG'u_j ;s6mWaLz6'$T{ )ۚ# ؂gBd7ח'gk7 C]' 瘤_O.p دg={$DS\]μI߯4tI&Siyk}=@(ނϐJ:(`R)QbAp)S\ߑAĸr#{)>,ͬ8Z&w㬸Hۀ|OOnш~v߬0|Mgӥ`I9ի[8V?paT ýw+BVwmM(ٕ[օv8bvÎ렵@$f?)/p㣆Oi _t,(&ô @&^A9CDbh;wh <"EG}f iR`$s(HJRy,W+khҢVAŶ `@5HрYM]J)8ӎń0,WA8 TeN`yxQ>㇏dryB|/+ZkNEG>6?^FkGNc;v>Q4{_2W;?+2̾-u:"P̠ƭDUB!#9RӨFЃZ _93Pd(1|>ތedӘ4! f!4zWtk`zUC@8 QV\*q+ĮX)kGDNk ha Rꙕ&Kw0WL>V(AE({,5Sf?v/OAG"Gc(( %|DcN ' f,#muOjQ^*gGY\0VŨlq%8cJ=r-N)zm>jz_]5IwqD#hPǫ̋TPKHq @7uS9ѵBw&,ywN_~:Lg<{T8r_; y-TmxWEA;[n4NiXpYJ?u+hSC^8EK8D|jT.[n4NiXdN!xjV<Ѧ֭ pqJߓ[nНLK1hB>ֱng2*֭Juk!/E{֍I ֭&)c붽| ʔkZ6)Z);[n;aJ1hB>ֱnf*Ouk!/%D|-ud' 4NiXwdnmj` hfhźUmѺbЄ:}cݶWeZ{jV<Ѧ֭ ppR󵅫֍!4JVEEQ=cEHDWN"NU&w#PA|-֕|}6Z` hٷ{qG:vu]]{ȾǪϘij_D `~c3vmhbaQj+64Aio"(3f;:Q`Qklu5[jjoQDd]}OWӂ&hԾ gw ]-H9S8j\AnUMd.3.#)v9.ܢ& ػ3%t9.ܮ&.L 1w965/w9f.[Մ#-1J嘻shrc r]UM]2ɠ1w9V5An1o1L9嘻s {c\.ܮ&h.Lr]SrjڻY WZAQ]Ѩ1Bd=.F!5`.`\y:۝B*xHt(:(Qi+vBQ1As<`*P;fX$t"F(6QK;@ uN%=6BhCA{A"Ry"@9-FL%L 1F |BhP5B|ЮQ0.W|jt8 ,c6%ZiU"SŨLQelHQp 84vn52:I ͢ _QyU^HGdzyVFz/hZfF@@ho O N#w ?I ˾#p0[-K,@p:ň (.uQZ͝hԆ+vTB=W￀G&ˉs6ܒp$dLJB0j  XdS aP: /?G4ڀ樿EěJ-§ӱ t>XHp˷.zoGMhWT,jh7/̪&^!Gh B3W}G]3S:9կhv..y-KC2 ] RƨF ;g6oDߕi 斈X~p}33j|plvqH9{iv1n @LJZ, f #jnWgJKg1fޭ0e\PI Goߕ{ۯlL+_mj]WS;;'oS:̹E|9cd֛ƛme7ګx90BNo%x3ѧFi=0bqg% q>Jo& E50;A(j у|1 p+ FY)~8ލ&1}!ޑpj9`Clh^?H;o#v5p}ѫ5A5%=[f:( ;]V%'@UF9㐂68dh Eg U GbF9zU.oVu~9>)oa­gLD$1q?#Y6%끢P,s ay5 Iof/zMڞn\\$ oW6g{CorWT7)E64~R{eYKU B5{O?D?Ł_jE I&|z7x̗ 5<9]ܛd?h'5nqh~r~>XQ| UOq:JyNxy\Zq)ņQv"ngo.FxAb8AMh001uW_g5tMM'ʾ#|1hSe%kr dHWD_ }re^4 `Kz N!EI[cWG׵+nA!9cp]/y&WYo.{+7nOzB .I;ky~53CF |?6dV?+#jEu¾[f4?CdEX䕗{_y*2yx3 2fe8pᴮ t4H,^BV7@O9{>;%UVi',;JP"3Ks>jt8{Y),{PƜo}/ȟzL (,,gl^Ol E~EբCЫ6'F9MN/x\9Kï tX֦O5Y 䧜4Egh3K?$r@8/V+6$Ku>؜4FnB<_+ZqaIqwNo7Bm,_MB<"%D5R@Zqr| 
&L]F*4}Nl>`Oiwг'\U/F™5P@D'BblAѦ:{ZHפOtm*(QPMxn4J{/,g8kY,v[ (OTc=IYk?s'Я"O'R_-wVi= yҀOMzVa1:CFqع@hfj`j\3434343zF#)VKtXn6L|B9B'Q!jFX@j)(SSTJZK^p[s9!c4iIM%\_ GCO">!n|wp-"?Wr2Z!k!"M9ia'e @>SlHIiFB,1A[C^CTDׂU/)ƼlF! 뜏!a,?J%Z8k.} F"urG% 1CE[SX%iX 94TFha&߳&c2Gq0«?Qص D-Ƽl:lkY;d컗ֺ p\_$=8S*J8pJ1Z3TA{Oer{󌚠ЁFlȩFrn>Ш<.6>ШbԁF+)=d=rjsbH]NAkNg4͎AZ -\81HX`P\q7bjeЋ <`L8߃۬__0Oړٻ߶$+p0/׸EqK]XjdI(;nf)ʢ$*^jINT#ٙٙΣdTG~mpL&w9Mp njf&^TSǽފn;84~MVhկ0J𤇐0)wahIHM2~sV9>qYAJ%^Ʋ/;W>j ~H'mxGI*үS2l-002a7~av 9}RT*sfNKttBQ=Nv0R6cӻMo4bsؽ~oT<J#.qC ;;[4\?e U fXhʹ9BrX ra=i) Xd+dSN̿r<:Fxٜ[;s:GOw-Z,F7;"]|Oyy1TCפ=_  Dz(gR! /mۉt`cj*IG7 gT/Jե-%oD=o><,횐Yij4P\9F34@Ӓu)u\Q<L\"P JiTIB0<I93xuX)N!9lFwfs% hޚ8e\d4]CDT#zaizϺQTZ 'n[8yk𙘡tyxח6ttMtr;܄wAZ&fLF'%C:~(W+UUlE7 A/S7bnYjǽwzD+wK߉fLI%m]m>{ŷ*xU1 6 n?\F{vLh yTe<7gC1A茂btZQxKҹ6o\4 a ,O]DU1xz:NNi:|Ұ$||FȊ28RB`6:%9՗&%ij/<񝥢-;IB 1.uU)g( j(y-"5n@-AƓ90NyK |TCRhvvy;( X9 šspsm5Mz΂/A4 ?LRS=,{3z^ۿlKu#of'2XaoOsD46ݡݡsw4B/q &IDP1y3`&e'{XQ\ UVǦ@'&`1oށ٭7J?^*bk1chaŽavciB˘1ر0MGXE $R1#A1) 9Ⱦy)G+髨ג}/FC1C1,kDŽ0#iɔ 2LȔHYSD$JFQ%aM^׾ ^z7}뿽{÷^{_sץI7o,oHĀsyGFreX,J%ZQhbU˃}L otٛp׬zq6 KN"-/N LY{|Em3%.iq{ZF2|-8rGIŵHydJOY=1=1O ʘE# G[é˜c4dUkL2 &#A"+4da"Y|?Z9%_)ׄi*u9*aa0fJ RNA8Ȉ(H(ك-8rٞ8\tGuय&pJ\  s maI8m? >s{޺ 䧷o–2(Bkrf}"wsѝfWJ} `# a_`-?=.?duiVЖ JlȮ0&+Ok.Ӽ/uQ{(KI_Ks&B%)"g-5JP&, աqN#qE⹄,[OP^]Ɨ3> CpyVh)blxN5HZ Z"w*SD T(!)U) 3j/omY I%o,% SJBH[06HnbbLK m!}4aM\( H2ڳ,-4=>LC S0"bJ'TX󌫪4d@114< "50zTU'$^AA[x-;va0R1(wBnM!4`Ba=R g+sa 5z֡vUGXRiJ#UlĂ~+ A E2QK8зJ0kQG,(Fb_M8Ђ@U`m+BSK39 M|"wU>X*fJ5VlŶAle4PXsP)*1C(QV8bpY!ܯI,; 2 )9J8a&?Th1ٻLIp,B0o m%[e)H4oF7v,H18@J(gɀOP/伌A`ٱ&Dホx+aOs-cm;L !@7=ISsv zнܚp>#6mH7 iE(t+#j{{kj7r~"aN$S<|}/9\O t{M`KaNf%h9x8{ ]܅-P_Ih7ys~7 W¸pUmv Ye-7]fevd_Y6D`6Jju2x/{(x~R<1qmm87ralviDʋɥ {'魚:bÙ=8ω&|%baaZ(` SC8i#I{Kկ|sAXAd]f[ɂ*hC -GVDZ d`k"#4NB:&\'D~ǔ]ŪɩzE Jlwp{(Һ0vp~e|,{ VE0M@< ZaнtOv[+kn#Gَpڞ9ce&J$e}*R!X)`[d H$x\4%'t5%J'B^yG- ag_5,=c7+$KY=q7O^`aO((mF"EmP Edq%(JMb½3H m.G-|2Q2&۵i੤D`Sq@nRő7tm50:e,ȧ!u9sH#C@mUťTmՋ썙N9 wTy\Ng?Vz}DMs׳]r z)^z:gJB4ڂOo{?ՍAp*yPOjv? 
~˭͞M̉huCZ+f8ID6bU'£@ jYQq=r:1 sg՘_,(Ǹǽr.ʛF(/_xhCZGjJPnp~PYBpkPo;Yw3< ;(!<٩ݧ׬GwLάmY"q{Mv0| qA>#/5~Ob<3.5^n<96Q֚֫Sz;ϕ>C}g&)k~\B >t@,m#V0{dǣN!6Cc2$X-ú> _>8ʩEOH_._{ wo.ȂXHGS_&Z" yЦަS\ʀA¤7&aq+L;IʍKYTBKh=O uq3\PzQ>y/own4UDk??9 wPhT%J䙉]20s~eo^wjQ զ0l<ӽUphPGGCMWs>m"҄x +F4_{9ڳ19~H"^kgP>zd<|&Wu|Y5Y6pO)BWn6'~)xShO x<Zx}6 ==tyP[h(  77YWo͇_~r~aW,p*<<W{4 ~b/9G`A!wT?;¿2_n.˞KLg3=/ XYJ1*ɒ}M5Ea;5;rqy5U祬,OCh]5b7ye&7()j&K@*VF(` 8B#-BP)cj.]͊dӲm~,@R )[hkqn]`%DҶa1DAJJ" Tzrȧ[.(R"p\kalZ ZAq3=[*cWLhV@4ںc܄{&d:eάv}̽M?R~55"%ߡwoW_lNz吻xW?㧧vp}bWJ ?go.6;/cv8; $Aς!@~rUgCn GLx 5- KWpJ)0"?-s110!I"h{S$92PM۟N޼WʏB ɌL2 ůP([Z 2ҭ֪x/Qa2D`XWnm6[[A&l^[WWs0Bd[]A5BD+1D-B\0_"Z]"XGjc8:JjJ1&#cLQi)+Z ܥ UVQbWPF 89 E !Hd'(U̓كܬHYqqbMVTyl]XNyUt:ˣ:n>3,!UT0CRf%oBХS!+C%Lq1iW:L)hCZn.a'*U64@X1B:) *m #jhOꤿs" E#\D1+ypo (LctS1o=̩TUaM37*fz#9 .V0./R *5} ( UE*$?,5|_l|Ǐwa;^ ?vfk D>AUPà?B0r_LqSsjuy~25FBW4yEY~C/+74_Tt\X?Z7F-6ok4Fpm$V֐d!_kiy'^?| )f]fjhYP6lgeH-e&Bp{nhTIiJĭ3 Cuy`YRaX ¨N21 M*h@[d3~4dhq[w M{W@]wWZ{4!zʧx[3r yޗ> %gh\X`́sTu>9ߓyy!CVNAj}\;㳬$DZ U5nb`E?" A6`Cr#OdyGB=_ 2b~`q,u4s F&Eж"x6CB_!! 3U:?qˀ07k\ fY| 3eCqCf:+-,/+q?B KWVnP%.q{7b-5]ejUALkIM0PZf8Hia@)4Lʄ_.: -Ton&¢|S @ya͞w| y@ 8_4cj= c I vX#dvҲwn֪BF[{Mx>!moDh[WQEV~Xb+r :؅vm[ .cf/Q;{B4-)\#JXtcvٜzuA c{A7 !lxhlmk0Y'! lD!+OB(osu(sxG wAVc fbSow_*B HNplj*03LA :j0ēO]UwG <3bOX ^JaaogoZz~:Vδ7L{δ7L{S<;VS(FEKU:!P2T8Mj(q4DwZ(MP$hz!p[$܁/%ӁO)~Cs>  y㝦HfHRWA1~*?!/3)SR-HS1цZ0%LڼzDc~ר #XWDpO覺@VpeUc07`˼Ɇ5pI-$JJqt ù5OBu^Zp XW!Zv`g5WVƦvgX< w -:Z0KOŽ,SꇾmH!ȡI4х"0UD"GI\m ECH?^G9Nik{M/4!Sd*{Nd=㕮@JgASa'-[V%5}$GGdth>ZlxTڢbR04ç9Q6"4JL`݃6Zxi%h=adٶ ܒPZ *g'}%'NjTI${8iţ` fU@(ZSCs` d9A/~~4t /#:|9$(IdUb),ډX)@,@. 
1KئWQ W#h=*8ΊOZD#1 1yf% 5EɳPLnL߬X%艔/83爙P)0FNY%Ui ԮD9 ROe8-U?|lr.@٠<1X@Dgq5QB[0mFϗB#KlC.)PEhUś9hGB3eN= ̘^ 7xmޢ^p*U>6oV+9zQ@Y7$: <(0:ĞDtFTCjw2AXv3,$hϰdQ/.-‹w?aw?h0Ib~NfT GYw7W׳ẑ-LŧoN"~ynnw5 ^% J_Zo izRb3Ne:sWfjgRh8LAHf)%X.OzxR>%WOmG#$Cz"/ي6(,O@%77E,A-r]ѣZP*GW3j(vJUу8E5" 0zԏC񻷺v:oJx{bv[r}rX_d_B̙SPՋKNk DQBxxQ9Ilb)ѯmA;;i2]#"V=Q؉`|X}{yv;RZ^8k' agz1֒;la{}>Ư~&ԑKfthgHqz=:uxkfo!oJȚZ̙j,^~(In P μQUmxǣ%eۺ8mٌ6 `O h;Bh&?T8 -¹jml/([ Gv.s(C~pܭly}}mFJ]:]||<ϋ`V̼zhLD] ,KFW=c?rj)s?t]7xaѓ]IU%:$9iMV-7Xg&G(h..RL)Up(঑Hz0_ Ӓ*10 G"2hJ djNkSuœ)%n :u"l3 6Qo-uPW^hV>4k1ZC l QpQ\#Encg8y{cY~`-Z܊rث!A6^D[wY0`\lw[d՜/חo=*M1iuOO:Xݼߋ_^O;vF`oRu#z99+g71qϏ»wD<-,g_\}\ ]C;,6tv22s[@j'$Gze^Q=b@Q# (r~ 2Ilm#A|Kzu1 @T\ݼ.KZ!۷MDm/شo߿>=T4#}/ܞʘ]?0, 3 s+bfTmEa׬ ՔiytZ@98˧W}ZzWAM{z vLO*c×E|na O8_ qi ^Vs~#YZh\iny/:͞_T0ܯ2%ea=;`3_$`*X/߃<<ԉss^t8K`ȯ ƻpޖlm'ȆbW4ZS%ȵ>[33:MnD?3wHX0|p?E$86dnn)w]=H]|Cn϶_׬U0C mR{WtKdaΖa^ɍ];i?-iGWoDĪz{i<}qC+v|UbF$Ku UgUTM}VkEL`0T-N pWK S- К]ij;UJܩ`\$m-K;-DbiF$J hDA־DWFy h,\Yql\0m:VI`36TGVjK򡥴+&A*8Q2\%ڹQ5vm"+-OhOdzD&$V:t5!;n Z9?4LV!#8՚ms-[X`rX+vFaUx®aw 'iͱmFVf܄޹)jo^MĶqAk_L,0a bLlnm2Azz}~z~;WEZ,:ϻ͛~HvC˄)07+wtqmZ̋LjfYWmB6eFnzm%un@6{:K( ߼hy!ޓ`]5"}t`-6O݊EO t@{yM&e(ʵT]`5&!u(`*^}#swqEE{6v#Iyq!t3hCMҬ GFx>a.1 uu;ks "Eςqns9^#?_R4=/䁐E/㇫vw~lXpES-;>џBe20P"BC{ RPT/wOtR> B\/vC0L(I13C,65Tv<`mVلB.[Q0I20V=P%BfS pXj! |Z M0~KPl j} %3bh\L%.PwZ;VFtq_nj:fZnv~;"Iie͡X-X9IZUJINDVγ`᪦I*=F @kh}I{ZR {!֊Ƌ% UuEY.uRlt"* 9Y֊FM6Mz:<ld)GgU ~L68&@rťbk4 IJ"톍!߈*N[NU+]ߠ Fm/SKP$BSX3WY'Q| RL6kGxn~LG7I 2vJp1:@ SۭH,pwCqŀ>Or:h ?ǜ/o'[-ss(#eNځo9brqܕhӳrGZ3EiuC8z}%9לnSSX tv膯7y tךYA@ݤߞ&6 >+h ZbMo+- w{ v._f4I[$;Gl%f+fQDVUbAX>d3 ښAIlqwߵ4wjc˺Z]O|9O ̱]Lh'߻' $*ݴ⼻͙LOl3ďvux[m}uu3WWo~ګ-޼H< ukY4?rby06cwy!_ 4~57`kta%=+QܙYP<$'$RMFaRy#:kn^S[lHև"$S[C8X7 A͘-QF+6^Afdnc[hgR>=cGw1. 
Hk%G8kxIÁbT7.lUO=jgf?KDQabe/k@a sz,NbzvyfDHDl4($O"ַ@4LRO3rOOcuYW@R"҉YNP6N1҉$C#$ MBT(-Ai>$jԌrtW=6I`[Aqp_.2A%W&pΑjٌ!YI^TV0SѪj :08x={P< 8E sDsh4[i2#HH}8_Z!ɫrj6կ~%/U7tG6'rOk|9SS?ITVU23 IDw@t<ϝǦX_҅9}MX[ck0φw.8cVuzǨoFژJd0'a󌕭)A\4gA +.l~u<cRy;"M'O#r=;?:WPr^;˔Fn8`l;+-2ڝy5DHuQ2PҊ;:NjYWDH#]*^ %]0Ҹ.+wjг37q]b637v33%HN! ݈m7-1/H}hv(=D6+Pfї<*;8`+;?t#klC7ڠE) .C7VΎm4t Rrth8t# h(P>RnBKo} zV d[Ƨǝ;3XʈBxM~*7H!/q$cY!I[!,)KG%jYAMJ"8aQCŻL>eKHH 0_)V0*kHVj©IJS93\2M®@Nدa^& _;H\ ‘qT] %H)㤃a)19=8̉RNzݓ]n*bѦxǔlw}ֻ}HO.I2O AӒ[*bD'u6mD*x(}<[SK#k9e+8t5e"xBj]u KȻ'dQ nbu톪\L2UO=٢KO.I2$O$Gn<5XWٕ*Һ!!?%=ΧM#FFn<5X1)C[&֭ EtC"'?SʇLr ʨK Jg-Bt-hywFoI8r VD ht*y\|Î"W@qMG}o̤N9u">7Hs;{u^K _4ɥSŵG2'}=?m[q37ɿ;ⷓWܿJZcɲ chIwBHǭmIa @=kPؗzA4'a{Կ'pnj$uq HPIN0>4֘E}B w۾?3oovUC'~trٻJ`_jl3DbO;_=4o&8w -3V_x gG5**B+,)vqr{-@8f{{xň:9xxm]DY֊4.-.BjV)wktwVRړ : 6psXϯ.S Zꚹ3)2ŁG%]4h"q>bd]@L9 j݁*jݻ1p=cz4j9I VmǾc: C:!s&/-@k ˁgEVVT?LҖLTYbӭk:: ЍY}T|gk<#ZոШPWbn%.vn17f ͚DK=fM:,%d b4_YeYc4lo'2NPѮiNIӜ}4-I4. 6X2Ŕ2Ҁ9v<ٱ-K{ͧe?˅ -\@HOK 8–1/浚%I]wEF;:C;@j%rsܸ=Cj]IssplL峔]KM`1bJ\&_3BAo5nֺHׅ7hMZw8AI,+Y:s$۹.!5/!#wcX-4asB'u{_ս՝3RsN_I4l 擆̠fȟ/*c0KJ1hs.O\,q'c%t[in(5̱6+Q  L*Ũnii*Ө-X?%[gU@Th T{LL!4QDC3XX;ژ ǭno9˦̠FSlM: ћO%\p' gBJ#DA0kc!@py(\Es(Z_&iӌc^sCZ%Uli!$Vaodp%Z0K2 :4m.cU OQ]s0s1.G/H\b&l gpMSn}PAPޤ[en\aU-[xILp?r+E2NxX׽5Br ͙ȓDzb4]*S ELfP'r+5hMCń9K|ّ?&W9YcHW"!yxTD; ݬ7I6]y |n:_UURo*% @ŢTT}FČcR%{Xyk-fm_'"-)->.QKJ@%HBqc9fQBƙY]C/T&t}!;7XHe/#^AVHsίЅWEV"OT c ͬ q1r Y w<+tn"HWRD72Z _GC(b 6I9 }@AHYҖ cu;?E-aç*.FjHnOk\D\z&Ju5n(ɗ\!)6 ]*`Z*TD5fA 6cb 뎖rV +昐)ƥḤ8s@ڌS5ƸͩmЀc맫OZB[\#N9ŵiAh&_ 0uR,Cai"bɂ* .jٲd`fBml,Eh0~ٍ=@Tuglz#p]1Δ>)čJP\{ֈTcu(f őؔf1p1ŧbH 3&ZX \gm̧C'7|YhѾ?Y&ʚ KV/ٹ|%c;{9wU[fa,*7##kXpT1bNO<}_$jN=7^@=͙z\Ъ$04AĜt&6h8BE1eɵPb"mАH{?~q 1641mTrk|!\Q*eR!L/ lj 76vJdʝ5ڋ7 rT=GRZsG e*eS@av\j,EPmciJP0%|JP5fMiBBSՕ\^j^n JP|TpIoĮ'6o"tz^C<ҩH-Ӡ bZ+F<)yHh>T #7|eʌW#jټqLo,R%a|ZHvBc`2]=`҈NsN>{/N{6bmC!5riLD8&9P2{ b0H(z^g+J!>`ťNa Ȅ뼍d颯2 Lľ7=t u;39wbiGX`M;>k $*hq؛wziƗ#DIF-c,QWd ߢ Td_ΞoNg=Dp <Ge@O׽5LKuԌQٺJRV ܶfd{'߿ ROo+ ca*@ZRbJ)I.;溭R G1CQ{33RtV KepFW 
$XpX,[a5Zċ[6T*U`ňWȣ0H!cpli-Y'nD~ڸًE6Tfo=?gsT.{5g^.^ZJ0Cpc^ZSPZ,g*iRz ~$wJ79N{ӟ'0O,+1i:9SW!ᎹwGxgpvojE _7wpӌ.fN{usכR4=e4<zgw^=~zg yٞ=}`/wۛ7̞ ?> }qpbgnwGYo㋗=4'۹`su띘2} \Qty{>;W8"qqg3/wƼEi2hoȝ^FToofqs3 gF {U̓3=5{hxjeh0םO09;L} ]>\dQ7rͦٳ'g-[1_ս@l~~m8t71n|.{o4=yyዽsu5ދWzo|m鸋!dqes"ae\8<pww! y?q"ݛd'k.^Sc{} ׳V k?0{CFsмt:v^gޜfV`k˳p8<~7 ( 쌝OgS4&xvqKX7<9Boֳ[[#uk_7&0dg=;~vv~xֿx4U`h¯)3"3ظMNeCޗrKpE'geb0f ,-݂UebLv;:4uR/eAs棏ʀȔ@b"<gg3~;=B'{# 1.84૔qֿ7Vr3 U_m ˜ג6Qt^x"9g4;ېA'JjBnm6v C|KPA )A|:T@DHz֗c:(q'5DOABT#)@# JTJ*%AJ)c 48%mq1)4Y15ƵƸVg[BSW$lqtA*أkTbP͂WR˜EXx46 a*G4J1Gi5V|J`V[M&eY="[}p`ɷjE GTo39{-WQ.7ףS/خҟZ`8IsU*AFZI` I6U9  0 .&Ye-ZiSB+}}ܢh[8eA%Kh SMAU 4u$ahAxܮ@L0l-VPc-ξ@ESAFw\uo@X[~%G,HwGY1!)R&Yc r.N0L +WbM)^ԤĪ UڀX "Eƿ^ Rm:.f)eF*\I0D8r^Xoqx݋ ~0ֆ_/3J6<7zeG(cMpB^I}Ts}m| uͻ 23_Co{!(u`!Zbx9H]Y?dj$ ȥzT?PO le/c8 ƌ|H =4o?9|_F `Ean>qMX{yXVoRAid&7[8*P Qs[S{oLUڨYrM~vsV@-C\p:y·>WᵡOnNn u)(yh; szZ)L.P,Fԓ!mԇJD^? q!P)fP2 T`Fuh"pgy}$ͪkvX͘_+1sTA*D^CcxWOQ4'1rzkG}_:_NLg;X4w=~Y3y>@wXSrr=!/Hcb$_Xo\}CqHH=pQ{G e<}*V~tRv}Lwa_ۼhul͋^vԣ|"[̋! bw6`9# w,6A`-ofps8ج9 CqWSD$0մ~4JfH ؄\Z |us,O,VԠo$0 SJN"w Y^; kycNPFG#3`I%Il")}A^1 5TA\sN.oG?mLc}{uh7IO :3A0}RSr;2BuX3Z 8-WYkW+zYW792*0w\tdh)G(>L XEษA՛ AfتfT0%54lcIjlQH&f]j"[^FHK)pvWw-# g0xw7GxOۗSZFxӚw6VX@eTy+`dI=j͋]iL~I$k)iL % Dw™yL8n2mN𵴛y3 ǎ:_a*F:. ϩ_a[襰6<=io.*`՞i 5-UēM״7}`y΃gS9K G:2Cr#ȚMs'˽\qo*Ű![)> qCKz'+c,I{xeh8k%ڙl6R;9}U-%=[؇dKa-<7(3+Mt؞oY Y%:C<ٽx2k-3p4⹤'nLjg jޭEѪ['7eY7nSvqMY?ΝR/;:v|]f.On霙;ǟ<5)R'z"ߥojs^fļi- h9h;sf;gXyAF2{ŌKPUX Jt) _gYD.k m߿..>]h;yθv̸J}?1_65Z~8;fq;8ZvMmw0u_@ 7n{~zfyѬ).>?-:6;0rvC .5kap21^K4R=Ո`\M!'k cljB*T=Wޣ:J7i6W/ߜ>5 t!9Լo#ye!6d!+ KqqHx5,t:HTs19gsX;dk`k!97RR2ՔRa?̠=Yw ~b^*L5YhPH&Ws~">m ڪbF Gd:xxCRpO{_T)&N҆A~WyBm-mw̫PΤ섹K?H֣Qղ<'i2}x7Skû޸iέ`dDZ/^D 8A (qP{.Qn#Ɂ@ԠF%n;pc2_Яv%ıS>YmcnnrpCKZ~KM>2G wˤ#Q]'G}oFu;xdQI"BIH[I&EvjX;#Sy BI#ǭ@KJ Xgz{Hw8R97V-Āa*poIVFK%uf)Wl6r]z0@@&grTv{M0V]z\G ~ ֕}o-.HM!u[+U -{sEFK%닩\. 
R%o&KZC*9GT~PZYU nˢpTaHc')yLTwZ:3Y oikM//:i|N٧P֗¾[w/7GCšw;l'vXz9U1zcv]=UMq/Ş=B#Wk]̸́Y~9n_ U=^.1] CUO_əyofkrޒ]7C||uO No:9|OojC}}kdԆf")cJ^0t@?5 oNo pW^Eopj;@BF"Vm-ӯΕo^_ǫx1X  ^Qizz_WFڟWn8]$WK$ U~ѓոB2G0z>7Uy9CpHi Hentg1x[ r~Ri7)n~@~ uX6)2L飝$bLsVNU; lX[NkeQg%K&W儐 kM>O!sZ|'_z^TQTX qlZ jz:{Xer A @!$Cch &H~|lXMqu/[AfD{ ;B޼mw9I]%@<+.쮩аM7z 2%B+/M*j*z~ؚ5% n^ ВvRQŵ\wT?_(frf*t\rlzP.@÷zzwQDv\zaa10abcq\~!E9KY~2ұ^f]Ĕ>\ǐ?~VbΡ ɤ &œ}0]YoG+ =wFЃۂ4;6:b*#˸22GIs VUqxk2[M}mRr_d_3tSg-EOˋxq~Qqѯ^y)o_~߿}_)؟V?p"Wcu/`L߫\7[)#)_˿Nmo{LeLTJ ! v_/ڧ+nVț sܘ:Ͳ=oa73nZ&OsMG]ŏ\NUoSzۘl@ J S*oV+4VM]]:biͼG6u];]H JNJqbp5UIbe6NКic&"mp͞Jfv $t0%qnΥ4qQ&Ntb؉ks*HaS~:P\̮ћ ?kCɧQ4]ØϾ+nQ$mO>^X]fTǓz ΀)GSK+( ^WtnO/xYrˁW↻VʜgVHGeh'cLBAM[yED.d S"Cev>Of ty qAS9{ 5NU')/NgZ^Xжx(NK2B1[Ji)HJQBʕL6!QJ JZ?YF16QC^ڧvvzi<'R&NѸ]mll\#8SMA5ִ5-$a!wR6\2uzƇ:Ǎ8U 8 THJ{,'@+Let(^m5!ʒQeR YdAzHǢwĨJR^H[F9q]fl|]ևٽ2-Fd=mb. %p 5mw3ꠄZsuPPRnpJ72cV!Q]tBζ)BW]anC rF [ye:#bzٵEޮcBA&,* 5vD40+fQyܦ"\?*Aaf9}LK,䰞6WJ5O UԒ_eu.ϟEM=U]2D˾'ͩa5c14֟&̳GH.^0L1F+.t?)eBgnWT_YjDo~}ye>sy> \7ސzC:Ͷ}@Z\TR)<|(X ɔI48\FάbXL , <鈁O53٪=]w^ cPUUZLkYs|^ZZ-ƹ(OMI&!YeQ r8hUdɀ-1l.&ȃƝX--%GSoihqՍ+9e/‹*GٛP zܚV2Y)Oƺ\Y&^9@*;IIyelnx'A)u tNEh'Xap4!)nf0{؝HS84~)OE(@+Xh~ԯ>X½QyB Өk@>)Y&A!d܍h~ F%o"&%lKGܕZf WMfTa+.~-n Bm# (i`DMIrU6Vc0W9fFzRVE:PH5jW z>> [83 B 㣦PBy;iIՙްgl`4q0D-7zM'2%3Ǐ9'nn]3׺<@ L99l{zR)\9Uhatz͊u@b*n8T6Et|B+"+TBOJK}(Ș)<'q7odBc$}D3.|Ez< Ȍ0YWNH6'DK;pi55M5'&e b}  {?OBY }M{xJ h }r厛$K;܃˝MAr 1GA+`,~I|[`6פ1B\;qR`wyU?cgϴFWۛ  Sawlx! 
7a_>Q/~٢f'8CDsЌ?Wt^&Fcܢ-SaTvD9KNuT*5HzuF嚹EcPi 3**{xG ["g8$,:SL1GӈI@IcҰ4+9BI3&mz(RlЃ40阱Si9fH#rS'{)g,MQV 5I70$RzRcZBΐ\s~>$yKHrAGg^b)I'xR9E蹗El]JUr )pPh*Nr,K%G-9\'rq9bRCVCXMJOyr;Dѕ48h!geI4=KE CRBNZ^0X .'8ܷMiO}耨9{4bj*QH>*HǂtnN 170sssh!e0T㇩$.Syܦ >$,mWLL1m8)}ݸ,:\+wnS&LR8n3L_bv7HiЖz%7m7aYa^pPBR{k'K/ʦ7JPɩ4)tMkn&H:І.l"چw'@>vQ6 i {juIVO 颈|2T dɜwETtq9%-GogRkFRO@Wf-}w-|D#Ek)%3S9)#wk-: U=PA*p+_DM:AD Q?o.|DUi%UG{û͟m抭|KN'7暃e5{vaya_v<3 ۙ=;?-l9.7Y*rpMCid*\``[fuͩgmߥ ؆},O4wūf*++EQBK!WIZW<žΗ+-y6hȂ^|#^~E'|xf#uڪl좬I=n!#TբxxN.di"Fh+bQ!zX2p췓[ڟߝΚԭ=>Q@=?V?lPn5zp~d8O tS~>䌿?v[qhι`4GD/B%f6VfNRSMVAs~cK5:92Zƒ0qٰE~7 g󒙋\^)0X2EHKrNYLr bCALvYvSD]],(?{ ZGilSgOUN(Gt6?rs}qA'Y/o̖UJ΁1|v;mߵ%80`b#:݂nC5`4 Nv0l3E[V˷z4,~U៕TC{Un܎kܸ_{sTO]y ~9׸V ?w~쳭"bxj9hb+O"gl{`t߻SCuy~jrw0#ir%;VܧX @NJBJ&+/nj7_z˺"z=_]! 8={`ҍG7FrI O#5XHoC.!YZ0ء[ ^jgR~Gi{<{έ'^pb~?0Z#i'>k'r2:|\ {R>i{m#;t[j<8<ɭᎏ~>cZ̹{*gt)h\#Ԙ6vWL)>Ű|<oh 6 3nJGuBtxex*!ZAe9 ƿi)vhظ݁/Çmb;gB2~oI7ǵmjq!a*"L!XNQ l,fsVfQA);rOYg޾kN4x̃̃\tBb C =a(H4)vi)"K,->uyyyJeM$n{|1%f<5ci~JFS߻K;e[a 6ee*%IɵB(!#ҴRDm*V]֌&CVqu~}{s}s۟DIU3/QzC>BBqI[#}TM Xa,I;~XJ(N 7ȳ?yK68fB?^cnR2d`"\e-=QN/Cvd{q28DX@GBqLVj$2jIq8U93jGuɐ?2A, 55s(Sαa*ɹơGLXڕ$&{V7UC#(*GBZUbD=oˆtW L"D2OI97{]ݕ]UCW +2Pi#$(x"JD\JPUBB4 C(Y$\Q k;Qx<|O5*WU6Y;aRyvn!dRV'Y {$ߺ-ӐT ).B5^[0Tg[jcU釞#DL 8^k8g7 ZOIKQoQow#ڮS%1rN]O4K?%XP+ ()rD\D;PU6=D8n : O/ C;\)?c>;*to#0e,2:s㘳\O# {uj/[n=u7NIZ~s:ç?~]oǘ(& O)O@\r_=" 胄 %ؚ/ъ/ ‰e+*A[ih@@ߨGgq|# +:4޳`_KgoP*Xs_z<^-}7h7p+y}a:5k )gt{+vR. 
s/qK|XQѽmBv<@@bJ\xpԒeŔG uCxVKiDZ2ɰrw s]oN0A=U;J̊HnX sKKMk؆ϩG+7[ I[$L ,i7٤Z\d7|[gsbIh:~D:XK%eB¡=/y@^f Sʩ*HN q Yj4d0~t3:NV=!hobA@R-Y= ~0OSı0zr+G.{BIiAA+!Xr8h'Tpxު^NJw=B>d'78}NE v`x R㇡rԆdA/yR'eK~ 4pQ!rTPin}v5CO!Zc\C P_o@ \WwĹA`(5Z/3aAq(!rbSR 'qhJ[є[rH6pT ^SE&$ݟiH5䧏}ϗ6|_翾?<l L[P`ڂTۂ97HylWH$]myw z;]6ѳikH" Ų$%B/0OyyVZ3R2JI8ADi z DbC,9 "D EwH?dSU2=6VkqQpLI¿2IW$j[ Y + dIEi!ReJ R@o($u[jv\:l~9X`2ʨ+*~H)T)„RC!U)ÐPgLr BARSڷo;Qv_ >+p^lH"w=\` 1x-۳)7ZL3~w墕TBrh&+߈Ms[ag1uh$ Ezr8T{Ar6IAi6`d)%zBf?2?>U0%)e1% !ߗJFKv<r[9ٌt\7BuHōdiO0;`-rF=.,or8W<`NyC|p{3~"OAXa˱;C]Щµw}KcJ*shl.7} ىj6n)px1߼O)jss2EXeOuٖv8:ZfWwh׃.xGsދwDt~0 hJ#Ee[oɇ~w/s&zw񖈻N8YaA 5U• hM_Uk3x2ڋ:'u^D4]׹U69W)Y y_@CL/ 7TKI"UbK&+<̉|JB[r\l%{=EO巯$'J>җ b5Y25 e &%'t)SDAS@UAA*EJҔ ,=n"XMC^Ljr$> ؊WO8A.$XNq9K -OT%=s8r:0=m\I}7 i`-DR-0Y/`N"mT *:h0iUJeha3 <<ۈHV&J ! 1bhZÄ"a  1bLõA"MHSwtp`2R$!E\Un^,h%5ԱòH^08A?Bi(1LP9Sャ# D@L0T) ڗHR#2OIklwLnU*}>K$cgA_UF.~|n%6[ĭn9f)zAI i]y 2T@NK@B@mqȹTx *0U6n ntDY~ }NT [C,tc{NSױW6D%$wp8`|A2 -f!)[PsUihq4c}q\ja d8ýNӂA|߶Ebb\| j))gŔ~,ZGblc(K/5A#wG~Y+)i *=QUn^' $@&aJ%5s<}V{vSYaAA(|ڋנ/!Qb{ۦBaW4V~&I'E6Zӊ5E*7))O[ >ݑӳUF[s-ėb'Mq-0_s[')X<˽%[^@?m "0frn x@%IHQ&P ݢ9F<;UF-vFĈWmO`<6 f%0qvtl?jcEnJL_'`~7<;RQjO@XX)a(qZf0ǩ$iv 6Žb4jIsuIX[[E>$=]?GkCn +3\&4]Gܲ߬vcǷw }v&uшxُ4;ķ~Yq:g?9 ^9H; EIU(9҄xG_>*MY}FY?}?~<+1;@Lj^x/aJR$sʂA@s@J}yFKt&Z/j~L޴%SH8eI3*eJPJ(}ހI4E ER,:ŘahJ ؋L.H<]{-4S8\J)ù%o q"_!]'u餕BU;6_  N%AN48ZT-a:F-ز[ֵK W916h2cpKr/~pҥuMD<0iFcz ƂyHᯯ^Ho1)h\[RaG[\WCB|n5PZ`r_@#A&V~uo󗜙`Qݷ9r= ?KOw(yD[TΛK!Q)U``_!)B|m97#9T M\aB@Έԃ {/%!>D.awa\"qQHXAHͤew l].%--PR'4F--4SdͶ%)/d9)!LZj3)(F Y4+lszbņz#mOϲǺgm'<]v8-N?YM/X<[~**~i3t|k}(Kpuwp fYl 'lp2#}Tjg+5^_'!{^Mci;76{XzAٻ_iV1/9~_/R}\,w.wgi1kV3]RzZ-)x~|?zjѤ,ct<D_+c~_D _뭶ߦӟok~b,03XOĸ ޽-.r}gUf߿)[=ЏPq=7,ݵ~ȟg3_ $ZE>կojzDe4-x5M4zK5ߛ gw p;ݯF|Y?1ٻ6nW(v=MnPA ^#meɑl!GG#Yh=3p<9;} hOu.\~=_Ufx9_nza/xRwlp;NtxC:upGqå|GpQGH1sTw8ܓ~j䶓JH,#~% Y:RP\ =*sw>˃/~p {i:/쁽 %V>O _GWywNGGݓoG`:&h$uN 9%O͔5^V3|}?{F`%\ppj>d]P8lU;^JXх+Pn$O&7Udkk!煿^ 3|>tĎ̇WSe.A2݆|5^:>^y& O_xRx.]KSrD ^ 
IY7w956y7MU.<}pt%84o?_"pxu|v?8BPyA`pCtGsrGzGxMo~?.+(W?4/'P.Be&~yw{dÓJm O٫@l͛?G==e.up[|9P|5p qUuPT׳ݸ\~qWRVq.RBX!_ʢG:VJ-X0Q6JeW}Z_6<)OŴ"A eEQ8xF@pPf5{z gcQໝd.S38,.d ߤ\ϟB1)cLbc甁ŻwISu \nOir &drmCC\Y߉\EՁ ")8M\N HpCx QUs`% J+xI $[SRx(ޚk.kq6Y\#yd芋>8ʙʢaQsXCD+59U^:ׄ-KUeoL FtKdם4:6b|(T0l':Q A0+!w:@lx(gcǫ̧ӝ"-N^<ɋǏ[%[vϥB;bq.rU QBiQhtJN_BҢq<ɟWedxyu-"8ޒ-ɿر<4ƸO]L@ZDeǦaqxfBT>e; "2vTfpe[z)e[ekgRy/w+D8$s G^i[8L'@y n*џϯ M{< u|$'G{@Gg3\"߷r'7dA'YT`! 7߼:7o9tOo~vۃ?^/0Yb`_!؇!ea6GLֶJ6a-✭KQ*Q;i^Sf 0HeR,I by(xmmhFwvqi0\/]ɺW3t*++}kn qQaň<[aςֹj A%Hm ԥ_t86{pՓn"a荾>Ս_{C{}Fr}{zܼbϻ3{ ПZ"8u;A.S, kS1r4UG||(N_|0lg4?[٢:?[gO3MTOiuhf˰j T!ZZG%=H1 4D-hVp8T0N &0Dlf%ͪ|R|")ߥi/M3Di]+ׇZ5ysqa `Xâ`X)y TJ}`=?<:}&vp$ΰ.s⪔9+I)0WDn˅Y D91JLgEw4j 6z]_~q,'Ш9Vw4:)QE?~'DFi6D7ۉnlWf 廝֫2Sy^H=WNJ4O [ٟ߽WBNL;1vb2dU&,)ӫޝHxLZ*uК=`lHRdD«ηYmmX<]JfJhDB[]1 ML DR%)74J`Բ~k5xm;.p.7#\R_y2S8)VmG<֡@u1fQ300ˀB0[ (0 "VNs"0EnV 94Xk r*@B(Sk!@+^@^:1F914vN>?\̀11TVYl^!^ H':@jm.xshO6.d 2?h3F_4s.s.s.s.ιbMY$cbFV^cjrU PrS@= c‘Fϴ69U"] G9KѾJ x<)ih415 ox;3{ &l秭𴓏׸`w+g4i uj1q =|{3v6Jqr"8%5qDkk m=^VBpNߧ;.;rmҧ;0l +aN麼MT^`uBڻ!^U?7X 5Zs5+*u)NE L=5vΞ |x@ 0ޑ^Aҹn#XD'?!&ꌀ9i:>!(v!MjM6UJqP+9 4h D#Di-(`֠.)qԢh8aX% l`xKzEkz @J"p1_$dx&bX1J䣚_iX8+FG-zؔČ KaaUJ[`}s&f`)7D5Qrmf )}PV:5z mA%4*S @QxO#j.px^,hSjx@001 0ChX GI &(9սF  }}6(#(%$e:.ey e^O|JD hY`pr%ϠUtO''p1ݞ?:<~1U\=l=,fJ`WTkmzft㘳\S_HxZ{>Jix/}uO_![R_Vf ZNBuQ)'_9Fwi@gӺk黖%ؘbr*6FVTi 117R{3Pm{a=$t*6J Yb F6wYCR%*Î02kbk%h B &fm"|"ikkDh"d@XDCB2M6wY8Rj\3hʽNwZau10&=qg " C@ 4x*#!D*q٣E9+yD:JũeR&z`hP)f}tx *4"!i &JFv# pHQj1z˴q{q( Nk uB=V{pe|cIhCwHwӄI .|\JB>r1P@c:a|J)&M&]`&(Vǁ! !+XV,)MH=\UR2d N+2@ax5»R!$ VWsvoFC2 TD.'",HQl\0Bcee4z=ƵZg1TiE:j;Ybgښ6_aekvrN_\TI%'MʼnD)$AJ lb]@;NX- ;oY7`w u@I[4XrپdS }zK [&) eP4|F>W$>H \ ٌYca!n KXA ̄N#na`@bB. 
m wYCg3qk zr5գ155h@77ҼAھa MRLN;T0#P;uއdu7ku=m!QAyL- Fn{F̡+zdnIF].8> v7O9.Fa5qG:R:Crb7uw&#cwo.u 4̍)3HǼ),+M o5x/Rǵ$*ލIw,r sS1'[8&@$m8Ɩ}}7?C?&8P]VIh,,JoL?z# J ?O0kJ%但ݯ#O5Q$6.9)M&۲W+F%҂JjV(gԚIHCkԩ̰E#F*,:ۇjєZjF58Zh*E(8bVJ)~+n &Jc^_]sr)d5 pƫP \b>&Hah8L?Y{#ق 0xV,Q)q{9 aއ|oGI_PjC@)pMJ.Π{ƾ>_}qhi_~SpalWjOB?xP|_n:9)8C#,^͌>SVHABOErS()kGQUl<՛餣'W}udlv=s*ku}gNr #fe#:R)Jz Y-Hl8!;S>dhA$Xax ;--+Q_GJ2Mf6Mo=c8eçhsgq\% ijEOrm`ϯs<,T@w`ΤL5g~w[\ %qF+GR(N:fD|(Z3BF/$—#a߲p.~Eyx#>H6;@r@ <  &Bgwҝ=>\)do\Zʂ^0[7UJwO1L\~d~?%WBhgxz)nGX1 4??d:ӏeʵ"h۽wJ>Ɓv2fI!X9)zs pwCE H0/}|IYK!}йt 9FHr t,>WvPGНm՗T8k֨GhOx)l-# 07rT@9 1JKzŦ 2kBgn!zD#&$*}!* /W ZcDMR5V5&C F#Cqagu%`j;g|m0/ PY> *OxM3%ɨ r60OW(RR PYx,~[]MVn"TszH9bI)7]eh8b OX]NB\ȶ=p fNpk\!3ΤYPQg;FشDa¶GS* $ f0wӐ)#l`%2_cLʣi О6'{+Uԋ]Yo]7+A^W(dy ؈d많lWG$ykUT`UbT,@T5xF&֦wkpbR hm(5Tɂ(V9`uxP4;nn#؄мRA?E%HCw:\}+NGs"֓Q4KjiPHI-t^'LJN !ipU@ɶP dH4_j<>$5DA`NEr kKtoI>j6_ mKvNg\AnWL|˵AD%;mqԙΆM<_j\b<]qC_;vEA P&!%KpGshJմJ!BK[ MI]"X^*:ˤ7A Pĩf;NR[>)ѵSlMH<:@C: P ZC$UAc ֹiPY)`˚ uBq"h8+0Z YQ ),ڃW*,[oFRfCl+L+P#2\k|jR=j$̵I^Hb][i4H Wb<9(CɁ\0 lې!4 4Qc1!en:Auq"5X1I3WM-q14gK T[z 1qh7Nn >t $ז/6;!k9Ō4)>J aF9M E!&+[Mr4ulNyR &dTk^KUC1ѥRf+*IQbJطwǶ>]޵zfJ-.Ӎ8n<}WsrKuiѹWtݚjr/pL|&Ye:Iѵ[3i92N\`ׁ &Fqh{d=4C'l2߈ X?}/1,bҀeGр'E>:ʡ)(|i}9Um]S#.W?߯CpƘC?# QpEuu(8cjuu #}2;Nqz#+r.5ނl\t湜bVZx3A q`֬@ס=1U6Vo c[lT?HekZ %K@߈NmK<#rLyCe}CTZCr5&͸FX5%dk̆>yy0WU1ףÍOS ͧ(_S쬃\?F^W_?⠎czy7LwCNh/~]g>8?}+i5˿I., :Ox48pE~Í:֠$Zx8wIXF$9w?{ߠWxwvAl>g͹Çϟ4/, $^CnSD~: SDǁ0!Wwe}zUΎz.lPe+d-˞%]ڿQ~ցٺꨥR,z6eQ;U^EF vrB 4ՂoGHi{an>p}3m=>PWήCrsJ|`覴]Oӗi>KicJ4b_snK6)msۚi~==i#J|-mw=r]#'Yj%:9kk꽩6>4]h4U%=VlTMY5[W 8" ]?w.)QMǥu,=^@xw0%BQ#('ICWZ [_ϛ F Nd ohXIyC aG>w{Ms` 5!xpV @5ڛѹh{t.wω߾ߍ`y. 
{iz^D.۲+榞 xw3r8Ĵ3v~MozOS)NPѽy ܾr>E1&PHj˵a/%asm{u 2:rh j?.c ṧsYl5Ԙ ЬGֈ([~ _S)YCW/0 MV_RvFn^Q?85zFx؆ا<^5M]b]KG=;73E}Ux9 !Sc!\|5:8am+|й2`@}h8uptqv\{ ĥCDU1^0MgZ㸑_eccZ@b河a=9m( UD"M{<h6G@ףI"K,VL$2r}ę\0H>͒=x2ɐl{E4KQF4+ST|4ֵ|xx0@`ߞU'#` *(x{Nn oyMW1jQPRä),|,'b71X%$q} q RcSqA}#鴞PᖅFǵ܀1rYZj_ f59[2\0)ۄ(_}*+fOTE|޵Y*jҶTg, B/elKOخTN7݊vS7^E( ,@ݷW[3)'ե_{^!:7\T')STTbœ% j4aG~5B V%j˺{y OKp۳_}AxpZTmhL΀9, @DR3vsN+@DxPչuFm-*C) Ċ4sB 1@b/חʥ_:vayL̓::.V2ō^#; Lo V(@^eVIaj48CR2'7BEG'j{~ܻQ3Ey) Abeu8z2mK!*@HuZ>9_\V# $CWNBʹyl5Bw?`I-Z3IRNUdp^b%d(.`d|_U#hSx&,걦սګ,Ne_M%'ppv)G'feHV(L&Ġ;}L`-$l2NLy?!\,c*')ri&a,ـ{Nڽ{zvLz8kv34 r0\~:xjk7GOQ{`]vX9ڥ@i3EacknEG5T8&G(6}Udv)d9L[& IPk/ӣ`:am}*w,?ȐM\{Ph㱪 Vb- ] &8B1Jf!ϻ8YYB1fubN=taoTOHkzMGrO #'}Gj̝Y`AqQ'jq6u:B"Cm<=b,n7UZ4 ȦŢ&wE[q ϣ"N=vŘ+Bsi A"(Ҟ 8if=ZE7JE$ӆsdN^}3as6xSg[}n:2C#(*$G!N_T'=i,7Wԓ~U!E)5 +JWƯ.hXt>|?F-&E3QKy8am}O#&z8 f T2a+B֮hѼy\{oN Jxu>9MQSSAҞ׿\45|ŕ&4k3ҀU|rsg_*zTy-&3Z&:j24"$ZgTPB N]-ߡ=7*8Wg7L4oG}f뵽X\̗W$/W>V?)ɹ19!&>܇frnjqyS,ڲ q2i5PW 9Qk g1f-2MGYVi쏷; py]oQκ~,.gI Vo_o7gO5feǯg-g0u0H$ s^5"yH ^VZ(KQyL 䪙Weh@0 'N4&b+ *]rViRqY\%(ޯJUCMu ~E#ɠ;g_;{'ͧHl@KSRyR!U:hdHE2U)}7)*C.+Qbʪ|)YP&k;%*Rґ_M]Hh? (-*cʭ[NbTc;{,>ƹ`$ 5J1L; ]-yhwv" 9{ƓYE;M8m_ /ːb[W^|nѧ ӌ`$VTJϔ)c2_]eR+gx m\,rǎ!Qk8Ӓ%B ["q/f?]A ʒrX~w-r@'_^>"1!{mZx6C,?6U{?q_"ђ\l/oV." :ܮޮG`[-~: ׾Z'/+?x!XL\^23ynG2΅gtV2A58::X)ʰI 4=c.E,>rt@% Epdsr7d$x(*Vo.ģ&]/2EGX4.m/;!R4@.}[Q߉ĥuu4o~6 ʌ=>#M! u7T׈vL;^fʐ\L?@Ie92Bc,S˨߼1F'PA2z(3;8 #hЦ֙A()OQC#y"/Ex!&h=7#NEѸ#< c:CWca<ΐ]iycPpO3kE6ŠY@`#tRJe(9 h%}~ ase:3w] 3>Gu"hod,9*\ZR0_Ʃ RV`.+F1ֵ#=\gDD{3. P(0Lqs&ȅݻsUV[|shH`fŜ=7JK|sU)_DGILy7gA_ DM3sF"Ь'Ԋu*FX#{{t=Xd&b:UL%e)&ǝz j;F9\QI=ҭٜ8!vm yib2N<5bP#] m-y"BG{dgw5 Vt>yUecdZ8^im%/ +AzR\Nt{^n1āyb0ױfkT#FiV -w8#^5j:.Eܸ?qZyM,`"]ZG*H= HɌYewn{ ,G/D%q9t[tN }Y:a{,{{=Y `=,IeS3 GNPU".)I-ߘ |Ihmw~xbKnnkv~c8~9G[gIgI==,Ze9NlxV]>7w GZ`Qoc F?krY8Ѓ At_! 
Z)3A]; Ǫv*ARYvBREDkQVN:v'T2oYw0[Gw-&PD&4H,:tWl.;BjƏW3A/(SJ%Hӥ磫wrzi$&)f M#D[1a>sIHOӥblRIіlg9REc3Gb.pKu`RZiù-k#:P0^r\{zw9Eh@#)2OYBO6F'FJ G_~o~Пh)ybkڏ9'{qsTsPϢ3T1Sy} -CbPVϼ9iPH9gah]|}kYTE^ơE%Пjפu?1ٱ&6  +iVJ6;=nRf AQ'&ːٱseҵ!K> Şc bK5ΐ352R267a,[U|4E+G:آ7vㅘ|p&lI/ܑBFVuOChr:!OIߜ0xk#IclkT=Q6`d`w77Toa&"jI7QQ?ĊU,2bZ#LMSRI'4r< TҘ}r_7I%Rrp@%ثJ ثJq**i4ۡͮOݴedGmM}E=fx{O$@N OiaB*#5ٰj9 LYހ∣!Qh G¥<%nFcfkEۨVMP/M{N8 N+SRo5,BJtZq_*U&PKQ'BaH1 /WI1]tgz9_Z \p ozW=5Rs%W&`ؖvvn]UU6w FE2Ww U?Z.wɿvT&H}$ӆOeE\jOs0;,21)QA;|/)Hk=yIt?9lUn)%-xȎUփ u5v\3yL3{^X9D`,P4w)+*tI019N]P '%@tr6蒊^:LRّ |N!k퉝"'/}&"dA,VRK%1#)F0)(|f`U f,EF $*x !q&gkR9]E9 &zCƆ#_hȢ5$RПjmO_YDbhstYbɪӖMd)VHńj*H-*K!4>%Te'e+*g3L>! }8}eJ(dcfJڔ|2@y(71D:&eպA%<{P zAWhXJd]}nT cJrJB9h1Jo]հ5N#]RDfv3+n$iS?/OQyyM \t32gkv%GH=yyZ-NkNi'k1Vz5B+XZ'dyƲΗj4BrP錁X:Lf|s{.t^0 ˘t*-u'7'AEWqc(0z}k~DZ7:  ) j.Ʌw2.$dM@;!jް *3NK7_I3Fu'|J8j-(fW<>s8fNPɿz. zzEH9]4JN^ UGUOm㢪t1i"lt95Q)[E#vuE(2>3LV'ّ%P TV9K$-6g0wCk_c<>V`SԺpq\> jt<Ɗ}#/l_YFmڤ'`+m)nW˂m+>< LJZ"r 2-GH'W^Q'x!+MMZC`*ź2F0F?>ݣwȩϿЇ&b {b`J>XjS۴nU^jScO):un\hh[?sy`o^v\\_{H&7 XFHWq`U$9Fg so+&߫-CaiwXjjxGi5evWû8k;1FO)*pJp|y0yoǠW=U*fM3FWZ.,؏0>F{q"8|)==8p*x?\yx_ӣZ-%JM~zv-&cRl[ -g]8]^=j|ùisz{վ||v$4(8Lg=AnaW ,"ˠce"̮ >DYEg(KkiF?߯v lVD;]v6FS*$X#(\&I.Zm:fK#!%q?VH"yc*}k3 HtH9pEIZDƃFPaK @S"cU ";ɃշR {ou sKr(p$g EݫyUpN %&Tc)!pխO<86&SP@(9{$@"fqECdSD֢j-Ls١y˝u{wU4=锕tfB `k DH~Al TxMJzyּa5JIsKЂՂ޴$S$675OJ*hr"y^'?ڟi~gw7Mhǒ'ٿ_r;ubP,KŻwR%[R^pzH>du V:7gO\<ĿowMk-"\lwUMxll^RM+@nc" M/b| 7rgaJl7-E7'Ͳx3)!s&!8JMQ {zR)aPvj XYj]98-eMdiNLm˴qɵ'4B0 8 ^#\*;e?m `I;nIVثjuLq]uZ-'WοYVLw>˜pIs&$B {?F1/>M=p@ߜryu՜5p}rζp&ow$^b5Ʊq  Y 4 C\&`9쾈MZg@PazI:}"QW%g@~I+镈’D[[/Ei6`SJA :AZփ06C؁Bu=pgd#TgD3tZBv$SNg8 9k:b+zTH]wq[F%YBWD֌sۻ)!=6T53]wVĚڻ U^ư?)G =4R ;G J|НB7writn.^sKX8_\]Wy.6@~*ػr,X~BUЌ^ܴ!(c3\%~h@MJhwpD8RdPwhni~d3f |g҆o(]|n7S\U߬7סmb30FqHFX~X|XiT0O'`!x){RV`V"Y 0.)v6Ffb7 |,-ᚯ8`Mj=#ZjM.?uzW:o~LNm7uzלԮ]vRxjғΦTq ՚2Tjl8q:>R8 !dB.Ij6ܺ iRI8JY(l H :QdUe7ZSq[Ju Ck35EM9]&-3ɪ̤H1)ErSd+e_ߚ6e_xYJLpD.fwD 4|ʗM0Wx].z@Wzx/_V =4c+׫: ͨɡ_jӳn2Ct(JقɝH]^5n>56?O3FG75Ep3_2$][==8o&zj8{:rno 
}ewdљp~gG Ԏ1"X.*4rWP(cAaE*ނ /wռyWЊ)YyB/W]A+SWP!V4vWԅK+U(D$c`RV@P6x 9ϤBhR"PIyπ9-˴9t˴.56jtW8α)VsZ%OuYo7sB/e$rj9x) ) K ҆L Qb֖ezhp@ߜryuo.GGq9cql0bK;=~ vb#YWFf]pv!~sv:Fh|UbeߟƑ#"K^=WUdC6> @=d//9$m,Z佽 =lF{eZv4#vU=լ"Y|ƭ_]xSl?Si?]a_,~GqgK 7jM8LÎoɦ?Uwö=[fE|weQ{$9e-4fX@1k=[UkṩIFTWqR6ve~6زXYM^ʢK7R[lQ]Ez)+טzw:_YLxoNˉ]WE~4fHOy-5UwȾ^t:oiQtlܲП"6oۏܛ9 V?p{ =eD0$G*iUc8n [z{\T]x8^t%-l"z6ET,+:BeDH Sʡ~.ͦu\Tv/NfsuqE(= q%_)d #oXf%ft#x"~%@^E-A`aϛ]KYfGfP, }~D~X(&V CkJ|q==F)"X7qHYSZNBGI G5":7 -t4FIpdct15+V&za}d=Yh=X{C]MngaW:ZyH:G W84L҆YFκ3I=G  Yɟ+ΧMt%ǻk1ce=EїKKb XQ%k}w&\%ВZW:9v{dBʊyA%zlSethҬ %NgmjT6fy:odG+B+0xWo~v2F!]/4noN*s=i#Uz*:^_~-_|wŵPik!!wNHfȚi8///Ky]Lj^_ܸzq}XW[/y$N^6=W> `H˺?>J$T&^ K)u?M C}}ml2*p-飀BڷC7TRr8'{l"~^[-`8ytV8doQ}~ş8;c|C(oi` *Jb*jL-4O%# )#[WV):$& Mh$R%# JѵȉgLSj[ %6*7C/a;=OivL`aը+ %%(Є:ϸ}O(Ly.Om|wf7r} Ej$ɞ2d=xeKx$6ܚ}<;@o) Q*`h34n[ ُDft~l?\4V}_m6O*G1MD-i s1e+w[7srVo}df cȞˈs>r(ٗOw_%u)ձqwUK^q ;{} _!6FءvW|ENHRMii CJZ U'|AcՙQ8D(NHǐ`n0 kyyO<v)*W}75wℯr%{TσfyFb[AjhK3@ ^ 2E.͈(Z;K30THb$er85) OšIsP =yYJTfhe500d؎LFBO'Q%\<P-DZ_g=݆w&/S>kgJ֗!~q}ӟTϞ|=<{y'ߏn.>VZ+S׹0h#k:Ԏup$ԩ6NH*Y7{}㲹gn}&AEM.&%G3nC7W.?ڽ%Kv8‘!I\WBצ\$XHf*koҪ+hjmB0JCd)GHm ^7}jÎśXqw}Bo7Qv~\_ǛOU>I@vNly0F  J/z37qџ.>{NwTX# wUNnַ2Mg&ǀU|_}w$ k(쫳K=x2_'ԻJ)~uuwuʊ/I&EhyJ;iv# X\M(gq׏ { oFbG_Ļ~d Ԭ8ZE㡟O6d (Pim8, >+ f2z]FHk:Mi݄84R3GbɱIHW e\T^DR#&xk{0m&lohFT's}S>KkJ ghr k;qNl}O^S}VNtA֮Oz&8Cpwؠ6s8G['Glԩ̗,It~cr>zj t`]$rfr, 35 'B8 \#Q2+sG)뭯D,BOIBR;;c[燎VҶc0D_ۏi Ji>HVt`LlZMb(&E}(ynb&dx4¥dId'}XaSԶ-zkC9w9uW<<3]a^%_~٭TӄCv,uëRZ]Cz8ù+k[8<^#diɛV3!$UNvݥf23 @*5r,ߕ>-vo*Je]D]YϺRZZ V[9 {=q]7>c O<;ɼ(Zd b|峻$M,AsOw+'l l KČ$`jyN']oIpFP| ۖll:QⒺ cn|xoj}]E M NfmQkht1a":[ Z WnywT[s%|}M J++R³`A;$%ZX^6 (d%bQoOxW6z;|$ dT/(bn!`TdiL2Σ̼V 5&[sM>r2Δw&)L./5+)r2΄CҋNϣJk0.% jw< "͖~{{YV1({vrY1oOnb|h[[zx ovUH3(qKA'VPIO|ֺ87k@φ(xr"qD̂QnxW5wWSү7/D}G qӑO_7kUo8v U\.BS9@ŚHezOzuV"^: ;R2ϒƇK),K/H{HDe] b"*%] Sr&lKn 8ڹ;;Ȃgǖ~'fO^&NIcS^&2qiys,RC), WRvYmkJ*Wh񰂥K8C0hY6*Bhj݄"ֵu@JZޠ#"ػGE,wV,h~ƕF:!@ L *%[1eL6bcbL* C30f/s'DK'0ZanfD-#NL Ri}s3-bxRdYH}y"Òͣ+#GE7"듧 c؟lyzGٵ̪]G7Weu,vi`0# !ofޣQ@s 
bHSV_X-."-7K7C~)Ř,YJ7KI.$!>R.QΞK椔K  $]Ni h["9 qʀ;g ],u0$$J( C1ɶWKqC3cᎤ{l$sѯJ8<N{~z 8Qؚ1G!WnSǕj~Dk0Q`з˅3t SswN#)LJ6)*6Ҕ ,f !cz MpS*$ڣmUU) [G/V DH Kvs]6|ȇ4FoFIHjϠd#3lM9Ff"Òdp._Ö(r_-0c2d*ٯ+S,4J\':1(au" d4_q(TȽN$f) 3׉% D՗cDJ: Ό,cfgq`ي)- bh?Y )9R@ -K ZB?Ȳ_`vs?zGC9]{B:?c{XbxLJF ?gc_g65ʹ̢ }s2qߡ()ڨTh8\&y󂡮kNM!)%@R}|+cJqva::J.^\&v9A1a)} tx24Q曉Uieg5: XMR͙{=4/;[8R[ybK[Ok=};Νd wDp$i4 J% s#n4_љ9#;(fM+!L/ ܢӱ;1 doQeo (yӁ58mM3t:f&lgnt NIHer*gn#Ҙ~.ʙ/ulf@u"KbgOk(^jkJc {t}g'"I_7ܬ>E˟?z?HͭmXU-Q ls{/cuZoE Y 8}CR~zmu4әRSLMado}=vb? Gr٭!d{FG/۞\bV V-gxu;1tp!ƄmC I"(d(t<_7 Ϩׁ0mKb=Г9n&%Z>lɝKhL'R8X6 cH g.)`@d.e!KȔe =e1z;C͕tfȭ"L/W>"9,nJ[lv߆ Vۮouwg{bݟu6S !Ye%敋BQ**Ei U]UYS׮ʹw{]+p`!HKiO6!T }Hs·B.JZ $ﻝ-[{CKjfqSV_Y"˻H1\iuo>mվ su=KR\NsRqtcuK/{"Ru@ Qy;>5 nj0ҥn$9P٨zt=]+ {)U=:ћ͌U=;ǘQdg8|=N݉ѪOxqՎXUz$"jQӁ5G0>/>%0thH݇TpI}e:,U_ICQ-4HQ-Hu L.ySJ$mF7H IR +w RSJX$* [f % Cq|[E!q+_Y[^hy$ﷂiXhG-%3,譝~TĞD#D<qdۭU#^0 T,tR&StiEBn W5.|ݴ&hv*̊, -cnn2Ęr0"]BDazHU!g)WN)Ȳ_߬j'@r~5 |$ ,۸H"WcIL;ADngB+؜Yc9d>%ysQXVPmKV#"hDE qx VܩۃX* H8"1s:F9+[)кyBU_fUҔRDCѮCR 2q J1ssq* 6 @!4Rز p6\81+)5AǰtԠ̀7c56A7QPc\UI%Y;/X)NP#Kݚ16 ѐA+iv:ئZӁb;DRlSp!94v:,Ǧ&u\!4.aܤLӤ=٨~rqiL)sFsN#Pj,jeK?R!X'/0;^S]Kh tU E߼H~?eՁ֙!M[p*h ={g?\PۛcP, g,ơw'n4pvFaT6 }7?g ,si&.܊CK+4VHo,侶lwe87Y{>u/ J~1XW .0Η1RΫBкKRڈ4jPׇπdDeYIkx&e k4e; =ə)˛4fv)|PﯫaR \+r^N+LE3/s''M#N22O(\eMIX/"w|w@<{O_b1Cc.! ^U"6ָˉKC,)D VY]R &ޫMuFνIw5y}dbϮyUlQlJSg2 &k"/+yXdT:ޜKsZ^q:wϑE{{|ESRXW d(,/D,5H g"TCuFFߐp+'.A9Ll k^ֹ% rWYZ43)+)Is?R\^BPzqux'U!oy/U<vfN'F}b8iƐ3 /3\Ҋ-JU jruuz{wo}DG =pk|EAt;.(!`8G T óT 1^gwW ȵ,Ѩ+ED8ƱDh(QQSV((#5L q͵?uEbW92>Lq#c628X2 zc}˲-$-K妰& ~61qYNY:h 6 k6.o$apsk("DH\M{=T KႫe*,9e'5X$:7)jɃؓؼZAk!9O SH5/j!Z'.KtX /Q{ە@ /ρknQ#q]; qQ|^WbNu5T!cxt#u Ϋ +-IVdeV|IbZ3W #*S.{Q5" KJR+DڲMΐԵF!˼0[C=9Cg!;Zq{W40];xcc&1v#~W\nkv#pUvs/⥍p}3=m{"0#jc'6o#꟯Fj1k}x߯n^lzS'h26&|uו.j+~[|}wi~9Hէ޽]Oq^W!.=ium$JhNHkqOZݲ;=`.nffծ!u?2n]SC~ꧮ6oLBˋ]=_(7c/2o/| ysoo$ON\3$3%SA:ϐ"%9[]{^\fֶ(13Jՙ BW|8R'i.PsN$y Ď,նUಗH 0U7کX+6%]qH쏷mk! 
M6h& ΨaHt+ZS"j !69yЊ!H26ޣOր1uiLҫߛjgONw.`u'<> i;.}E :[mMs엱=~/-&9^;P{ď5Zdn?o>6raszYΉO7돭#zY?[G}?4/?Uwl~6I{M^C[ &3Ȑ KfG7|O-U1(·hȘB _wUH w;Ve1Eл:{Ѡ1յ,&aĹk>w w;.`*1w b|n)pS`ǁȉng%aoQJ;*m|K}-5WKs2)Hh+hY& yV@+5ΩrT3V1g<˙.E+%e %5̥=f *D&JD_y@ƹ=Y!zDUy{=.۫oO-2ͮDžbwbŘ/]wB:M(vnBY5DJ,k]$g|O$h$;=grNgv}{0[/]5Ϯ۾I~fжu (˘F^o֔T҄ܝݬsFGJga׌̥22m_&8aga/.08UCi=?'A(j=PDIs$Fl+zz݆_jqdx6^-Rٖj%nb`|yG^yU]5Uu.XI$ ˋ!#毫Z):UxM=e4mΨ7_ηC Q⾥<%tl#EOmo<O˛bWNQWKauZfٗ%n"0?A~I,Q:ΐ }<@r_>Uf`/j:UFaL, &7<ʵͶNX<A0tUa$Fǚ%σ@ixN8OBKgJ$XZJ$XpfKֺ?\Zc͔˼Vǘ'1UPb=Ս7wڨ=AiPܥZ> ktD=[w;l v4@OP_U==nmo9>ɧxl.~Z2[dW٬g43ۻyMsv:c] }nq= L$QI멙UuvW y7(t1Pyc{× (Ǻ*?Ajd- 8rL0!\ ;;}7ݴfOQn̯Xd]⊹u5L򠉙)LϤ&%o<88&q&Nm:˞ rB) a(uB .>$&f})aqf[A"/z?G*c'NKR/i&=rZl ^/qer]r?4: i(r2$ c;SD%$ǜQ^>șl&fnxܓ[ij]Q.#,+gD,^5P(V13JJQy.I1oÌu:7R"Uu^ I':&B(PJ@Y`,Uo.cV|l~}hoPʛ%)Qyt] k~˧?l)YC~.>>Y7cp޼]/kKr}k!B+/p6w ]~NFo F&`bAglqŅLƥ>e|lt2g3.IEQ*wG'%2ļk(.rʼ{_9F9#)0$<pƗ#bJw9t#(PxX,+qLƼXU%2]p7CVlŒ.َ%-|#&X8 ۫ŬY vCq` BX@7EiuDFuzZJLGy'˻Y}~}ZPХӚ pCF0e6Mp R+ˋ2gBZB  4SyI yΰd͸ȫHgZzUGk]6OLbsPc;6Ȅ;%織&q%dd%ЌD\Y)[ RUhʼ|E(+jnsR_H d'>'kP;G$f.j*3+&,+I5Y)JQEy%gV (uW & TM#Rek(=X0b/uȕ1QUBADdE <\tM8<L^g 2<eE!t⛩j$H 0ޝ Ĉ!ׄcx&ŝ;T 0I$3Ɛ2Ѭ4k7JXyT#6y;@Z^}uI[4%$kq$IG8)Cf& Jh[T8F h{Ik:X>GwWNXuZPڝZG+Z`8L rLrO#Ôh@"}7AWsbST{ ؏Oo ,^o^^@C,{ћ$ݛf% jj,?yr=# RNpvZD\;[hq ww-DeZrN.]nލxp*֥]0дSs ]I߭@L&%/B+O؊Jw0 Gńn/.}Yl=Θ"Rf$="ب)A:pumKt휺ֽBcSv.qV ȨVjZTO|Q៌͟Gg;z?َѻzCq7R0R"EkuY +iIx9/rafҌ<ĪV 2&Y}>5x#~7ZYq< л8&^Qjr53h9Z\V62݇Nڛ5.ͥ 6Ԟ;)6Ѣj4 Po΢ZKJHVѬ,9R)y*yUSeuV9/j`QMWO i/\Q)ŁPh4 {m(f^3F$aAj r"5(r 'I&5u pLP_aZvKqB% Y2(F~GmLkێ͙YjzߟEa]!j%]V[jQTXY%V@#ԝӤ2PLAMw]k_O |E?Tb?ĥ(վxqe g,}1 K52vx̌}n:yw~LFlT4& o9ɞ2p;vwE8h1Yh8)CLbِ̼2sB!V*wbW,X\?*Ot[VN*O\b'_GBKC2N۹ tr QӁͿpQsQ DNW?&_* ^N} I>@eؤ!wo`E-!;#$ɼ[xs Gc|C4dLQ8wb-.C'>혻[=4n[cJQ}^/۫ ;>\U=6Li~=Qp:?_}{[y<_At{NUD 5vG*Zخ r CK_yȳ}E["^t42!I8Ubhr؀"ţ 5Spj+YeO'Goe=}=Sth ݍv>Ů/JӉq dw٫^PTl /UoSF:Eň{&𶎹bx!1UJ^=C#) A."onYILU2VtdZ d "%Va$"cARfO5xApy H j90Z뺡5`ֺ</9(oC]{LѮC<b5ZQw1b:/yj@}dAOldt |䖒H۸߭Ԗ Jn1 1 \l3ZsYI*r[uY )NrZ9FzNp?w73X%OV_1`dlW1sɅ!w.|[gQNk9Ӌ^I(Q#\D"_]~p?/ŗkĿ_i26c<[ ݪ 
:\W!I4]gdVs1"R?QɮA7y=x/ ],{}V/nfkZX X^'/~5wL'\%'bUzj[gzɍ_%Hpx30Afr^KA_ȱv=f7?lI[rDI6edx)" ȃGl1c;gdҺ*ݦ㱸^3KռԂέ$ԑäuQ"Lgqz_.uBtjܐc'ESBɹ,U:=^_>ӡ@H7wSsE&ܒߜse[Do*sWqS=g:|yH<7 IB o}8W|yB.fq o]݄ݠ$ؑp;}{3>tXS% Eؽ&`GB,`{1a=T-,q: -tu,@ƝHOwKcAbA:]$e2#|5%G ht%τu|rXM?ܣ'1WЍst$b BO!Q y?i!ZLaC ᙜ `A9-ZT^anǃSb#5"@N@AM뱑k6-&F0sg2gSS/ IД͐ ?NY I1d G" @ȏjՀ xuZ$s_v7xznC/4\h26z-~X|2~_!_!=7o7sFKUa@׆cmh)𒗪.٬( 01) ٗU;zt6Ce~fvSxJ=ӫ7g_e=;:pB#x>vo<{.*6%bի -TTM?DDօF!_0gd<BvxbX&y+_.BgԣMU}QwG;wwn_KFL.$fMCJ\qʹ7S Drum3i8 PK2oK؟2Ͳ/<(tۜW&Z) bEJv^a%t X{馨̸5f7iji69Z\͡Y1YZSl:c{] q#,h or AISrqH&|+ME$-hȾA.u6Wt?-Wr~əݢ;Ήo:|7*ؾ/<s(Sju .|7~F]}|Vwd 7Ji%cxzH?9dgsw ɡZ}5=0W+JrOnT]AR#ʐ<'Db%9DK wF{~:9b2hV+^XVrx|ί^ك/D{PI9- {{ľI>aR/ 5)}IȽ~:N,Pqjr7:^Qzv}S,*iYKWh#Eh֑ F5->9suj':|QJ # Dm+?B ;~~T (e*[>?;2=OzTK8eQ Ï1fuG3`{*m{cR n,~B-_kNA`([,'P,Yt =!ƋWWFVk!P/]j%MU_[7C~䔔_ٳAYʠW`Wf9%}Id:1G}iy#?v͕uk֛V4ӽa7ZNz‹V|ݔ34kMnX~v:glզ/ޫ~$,IsUqў]OOYdҫɾf:\ ]ewNݫ2O~-bsksqwK[c\_pF.*ZyIEd fsWHKsJת^r4a1#dqD7]'c 5܃HNp|ܠL1_zHDk?-*XӺGb#N#Rr%s޴IRe.ӆ?ȮU޴0,[ǴE w/(,xq.A*J*#kəQR.4mj`XJ*KBע2)MYӬ0DqAUiJ KdmY6r´PjS`kRjY9Rc\9Wb>P4jY=6^Y-W˕`t5QdaEqn}WVxq1:eBve /M-oG Jive̬R%ȝ b!I(nving ^zۮwmg@c+JۙJ$c-sXk[ss?~C#J9Z1f|C&W(+dž289~p:30V]b!BRѐ8N F (B&?L:c"'tI\X~0xx/ׯSo%Q 9k2` qf% x{T%C=Ff0 TPP|̾K^z ZH_\zB5$@-JhWNpEkkW29o6E-uOKǪ umt[%~VDU"ʜMd.oOlr)L)TThƎ܂J k4s!@j쾇`.JuuQ!2\_oxYuDD[EQ<^5GY9@y( a^':n1^BSHpB)c*$"^:90Aq-p5z(Ɇ0 -Ht=xU.?9Aq; b߀e]02mf9\e=!1 2m(8( 0p 12b7"ᷢ$.ڡ8: 8^c*!GJ Js p)P b?3]ƫ /h5ѢQ,P)jSHWH 55-(q,<-nHߗ9$*t-DTQ!Ih )&05RJlޯB5C "Ьx^,0c UiX0ΫB?J CJ쐌Hĕ`ؕXžOrlZ}D`ʬ J} Ŭ&+vSdDdkmD<+ YiNn ~=*&pvIEڈ:ڒ6[.iښidd6dcez]ɡΜ2Ԅh{p4gb}K4"#YZSdDL#Ģ33ɀ5 ہaÀpF0lA  F%$:~r&Z?n1s!JϮX<a* j;<)n!ip Hk鰌K慮@Ha&F$!V"hF&E5!3Z'm D՛༁IC)G)զ:k޶l|aVAzh$Z?ڍzݮ "տSkeWeO!< ax AJ=#V5EU8LHb9os q6`ꚝbWD&u ?܊>>|?bbQR(RWp1졺m?b ~#)xoWm~cMۧFyuv?kZQ7jְO ОLj4\06s*4zH(󦚭aW 2R@8b/49!Ӏ7#rA8< m][?M5+J?Ev!;k  i:*{'8 V#Jܢ-!dL7Kۋxȧhtlr Eq2"t-96$GbqY. 
;$BDIoCpduO`"0f/n.VңQ[;S篟NnA%H頍.W(Uz'ҿ@k~y ACX6gR9jI?qF:$C6IqVCm+Rr{]D[ OG3$BQx~I1ӎw by$DVɻ,xZIR5o"la2hj1X!V+ؓH PRHpq,wNr˟v~m?hlN;hY[$u8 =Y+fpˋPcP+'}8(Jt4Mc_9B9tc[qux˛IOB,uO%(p3#;ǒlYOERRa :\Yy^p(p`6o+RT\,NkYdš4Fqµg:as|k|PTH| ߝH<0RYvSrl b@try $I'5!pQ({L:YLf?HB`*eRdr!݁䔇惜\dQD)dGR??wiM>cPqF0=0ĹW#v@"1‘pFx$p*sO[ŽƂ0SZbE ?g?js/WMh~橋ٟT[ٖ˯nh>98?l~vDax;e1#LQy(Aj{b"FU]JVƱ6 4@ 􆹎GEDGVum$J.T$#…"ǵPR4F4-5҂slfB!&% y#p[a.Jk`H%z*U\W;'f%"),vX퐰gܘWQ<HL)g1[N ,ާ(Pew5l<`z)4! 6&Ⱥ֬H~}~=o$__,ѹ1 [{J?nd}g46#Z߂4@% -Q.)뫭N9VW=hƬ7l7ԫj aտ7n9;jRS֌6e[sSZb!5*FPc^ ǔUEQ؇/ntԪ^sqvgʚF_aeTey E(\SG7 ( AlK/3+39ܤ(ll6cc_O|D[p᪹.-r.cwޯ$!kX@ 0-'rwd.Hbi:l/]~Kp[C(jit8E}aC}𞸈$4!IlL94pRDERjXN1t`MIeLb$CvZ{tsEԸ5o]>g2@RE]$Q <*rxLTu=@X9oc\lDf xgccc 3n2c _@sE:y V>Tv "2kLGQnqE)ooP0ftE)DQ`qBRXy~ٍ'0\(1TsxiU]WdiCbY*Yخhc펐>\smJ#ÛwY(SymJ}Tܤ6z*\U ӓ箳7l1roE` )T./k&Lm+Ebzc&ϰys 橓0+L8]^Gi|.yީ*S%ıƤTF]W7޳z)ԴތGܷ.ʬ>Zoul=Xxnl rdw-3]j#<)RMYu77!# sҁ㱋h_\lDX_ۤ8MEDq.!d[ ńklAnIyH* ꆾCE=^Ss7LA8El(kXYpy5F룫nҀ~8BOAgA1^&ڊ4s{<~]g~9_Q.feϱ:>2TVv>%Rk-jܹč j5!dž:ݣ};϶Isc46sP@j2PrfE3۬Z4:a3Y/ˆTE@ ;V$%,z`i #iˍuH' QL\L&Q+`(iBd$^*b DI0ehpRk-_.~3[_kd^NzM *gO`8TF?}P ϯ/h)ɯ.7b}g}| E/'LfY-bRQ^~w{8J)@N?-Z=3wQl P&p 2ág2 lOh-Q]no]w۠eӀ2$CE\ZъU_o:{kExz+"C?Fn>US*Mǥ'ͥ`¸r%ɥϭz} rNǥȥq)䋭K lՂȮis)ga\3 Ovp)ga\D`ǥȥBqPJcrsX̵1Esk%i U_oZqIs)%a\J3K) lXg62ƥ,KC2ƥ٪A.^z\Er10%K[ڪ91t\="Le14{ZZ*輧SR٥reqc0uxFI/OUNn/u&܁\՗TvgoLU7Sٳge}C[F&?R>-8JR̎'5i wL+$`}]ڷ41x>J ҹr͐Y6( |f>[9IhɅ&O\̑cGҺ*uz"8vJBW.+i`W}:x U_jHV@ . 
Kq+ er]˕Jnc֦Ś#E$)945DB@#>W+SO8c~mm%e^7`PcML̜/b'sV stSư2GHbo]2$ χ U  76*s:v6:==ݘA^̯wm 2eBF#8"m$P6`m!5{4j z;_ogS7sV%fASN߰~w>޻nَ_7o_v4'K gn;a[}qVcucV q2sIJƛ틓)qfG}؄PvFJ^/|vnBjJNn0z~v7~_kҿߢi^o,VÓ?nʄ2Ԏ_S<'7>9~J_fjEEk_ͭÁݯd2xphظ<.L-ҍ/J7^n )(JFQ"$,Io(K O-(b͵HX4rzyA7 ^ى^|l̍t +;q6L9{O.7iVs`n65F\A˓Jzw?~߫Ń=<$e8Ƥq;'$R_1>m..h].,s2bu#7/HcKw)s;$nk,#2Əs6NHqS'KwIi# & i2 3ױ $T}W҃h//45,ECmN$::45-?&%衚DSn9WOb.XP.5 )#!ښ4iЖ(,X"Xc`Q(51%ɎU$H]'-V9)v45˱QmX/ 7YP(ι/4ORP>?WMvjR;Md5->[[ \85RhAHZi-H%j)ID?%Q[;K61"I@;NPpB#`L FEeBv @G39TH:P17)E $p,4>ŠJ\АK-S q ,#j#]YlX5PյG;f)&K v& 4pIe&C<"bӄ8b% qP8FSRśA"d4V8)s Z+dbk1ӗI *EԐ0BuW1~ɦ]-Adzh&峇E.~3[=!&ne]uE />F5~֯f z첏> o}@LQG?kTd@l4B;w{W?2 t[dLZ$όJ()`c<1׃ eԿp,mycĖ$B.uP 4&TrYun{L%$5t' ~3u2Q|+IXQ] ,X0M)}QfEJb0=w۠OS>jEՑa?v';qFFJ)&3 w WȕS ?exA.<%D9'by_&F$&QI_RtBe<;J1"\JjZG\e*RxO Qp8IR)` .ib˘EDtwN@1KUk^ﷂ7Q,),E@%O/˪ /: J3a;ȄD*"ͅƤeJMU2l#"'m0W(. =.+ݺ SeP` e0m.zٿW>ne2 Q&WԿ(xM!_}Z!axJ5ʐ e#,xmwDjEEd2g!eTsBA=$ tieiuE@^lZrxv7޺v6660[Ymi'A>fxl00v{?_YO gzȱ_ J@,vz;tegMHr3˒d$HV䚇$.9<<OI 2B` ̺dYT.q+1+AsM)\:S Ȭ}Vܬ}c9`XXu mQJ-. 
X,G()Wռ=63}>nSߛڊ[k`vĴk6l;;Eb/FG)]]\CK7rιXO)>♣rGۛ:~QROFBY_bVP1cd B r\N%AH^u,υ3`y,(-l&gm0i;~oGi`]*ZbI"0άwr 8h> $HgC ,EJO&*R|ΧQ $rTuQ?9)&"(3 $gS$jS ԄUF^6 T{EUFI)\KKXbT`Pj!f"^cEvGz׹{L@WPB #0t_wN2!u!( .bƝAa:(8\'0:X]8׌nOqnҞjeky0^NݼZhiJo}*X^gd;n.Qũذ?&S\r62ٴzhf#;Z{ȷ71l-0Fh1[+`kVgɧy8cjjqg㬸%kdx*6fQq',w Dw]l@6[pZhF,&PS9m@=-)2_bm+t3 !7TsGB+&Qsx.~סc7EIZH/d4抁azډZi6n>$䕋Li)$/ q5b`8UBv~vڃc R3+<3exNer1[3m|q_/c_QMABVNWBG䒪8Ahmvv]-Wnv3˔Q2)Kg2I9j!R,2Lަu+C)UΘr+W"76bP,gVwC#65&|AU=0ZlL>n;p qf'-qh[n0֊>7DmLTQA#3qG d10aA.#t P9ya!!S"(.~]jzq,>5= v ^,)C)%B\ PR5 V-!,frVbiu a] BPguP:1ANdˋR( 7ߪBgyZ ,R"E9XQ-ga/a}9Tգ%>bu_OiWK[X>Ǒޠ)DpUoǟ_|!bݪۻ*nFaX`ORJya[rw*8'Ͽ={*qh $+j2OכTl/ƃJ 0y5Y^U-nv%w3vlzvZAW䎪{L`xTѳqod*) Ji[s2_k d$:7~+Nw_v |1 z^PYwˇ͓%ib#Y[y̤; s };Fa}x1)˰Բ UfK)KztQ}(/V?|" 1@VƷXƦti!3D 7A"a>9^x)eX..gkX4~!gv۹!gqc jSv;_5Ύ1:g;{dxxdQw[`85YmBT s7N3~{D)F>ᙗr?̍{s=OAN94rtSt]9_?s~NOf=1sWhbn=iB?|sR(0,leaL[}0q뗓 r8y`Y`fA4jR)v4GT]K$@P/AQĽG7FLA;(j3x; ^m'A%(j>u  *%J>)}UnTEt^N4s,׏=LivH9mzeA /7)A䭬*Xy  Ah2IbF%yFJhb|1`Z[v|z&mh8hqpqlʐýIqW Y6`.N*YવuBZ aMGtu\*"(#Nf0SIwTB5FCM5S{jM!(K+T>f1~9kQiq?q1PcvIn{[ԉqey%dkhfPHp8e],az)}ɵPy$9T U9ӶXB3ìw.gښj :HR os])A:ȿ},1wή$Tyޫ,V˯?Ws2,c$ysu=]]oFg~^Q;xa AOS}sL5ԯڠë ~LJ!2N K@}:>)q(U:j$ko/Ϲpc]whP>XbWt6n|=c\g s%}m9q.xaK]m[@BMLs%u0y\T:AZFNZDT3 (ټNU{kEw8 V"n5gBI}2M7i{9@ ?wd|c+S/=| ۰+aHZeJ }$z6H"wtA,[OX_MR\#HJIDyEͲ4)dƳ P;`8ͫN}2́X0YqS&OߕvLRr6L0CeutUڻx ǃ!t!LR(pX{hHx|E- dtǐ  =a['6hG*Khvohe*ýglG(u#BK\jcҥ|ztS>,cȎ( A*}0@BEeKiчI8š;G syv Y IH&LIFr9BB j}粃7:;fk= :|ZMpp~,7ɓ#G2Q=7b-H 1P6. 
@=X#H<{#sр+Zh#8W)1}pm,M!<S1BOp_F9\B-n' ܳrkLu1YтiFXEDG 5@-p04bX0&rb, : X(W^-@Ȼ1Y$l؍X̜-]&e 2BL^P ࡬1iڋci)$C|Yح $?_;d6̜[:aV`x$:¬SPy+1\(;"r(NvJY b[cI\^e4x* Acʢ̫ (BPWc3j2VCbZ;)tHUJp(C(IbkTWTr)~D+d1d_I;%3> de/d4K,@jyF0$ߛŲόui M.%I$E7Q,Gב ck%0ywLo?GD= Nh ȱ^ƨD~@L28Rz:&2ڝZL^yٕ@P}b9ydxX+Ś-M7QuӤM)]Ӂ:$N"al=ьbF!BӇoU$yhNZ)G:Em v^pX!=gsw 3%甆kEa>>;Tʸ:&n*-<~u},#ͅF\X]_RdVbX%@JTzxBd"Kx,nY; ~5gIl幌ϮIp|u6~DMSLظe%!LLmqfU%tׁٷ^gx{I -8}V)7r3FN#=XL9gϿ`iUk!+S eBIP2(0fR۩_(0"fF4mթ=M^s4z-eeOiݱq(Ɣ8d8hCm8))l~)CNt"tNI􁵕|ޱʊ.7 ;~*zQm;^ngǖg1HY3Lm<'GyGYKJ|_OWaQS2}۬Ґmw_ƹZ:v*7+v_2NIXj-hg)w)^ȩWF-R#:tFbGbXZ3hCX$knJx3(v69x};eH mò$/Z?)QچN?? 5[:^;A j8\kC{ L~MqpO&ΜGm&K"g饅Lgqt9;d2xJ7X)Jr\r!2ptqAPPkd:gxOsߒ|LP="=gMG/ i$9EvӷW L ߞSqNח `o2\ҧιU"tq,"-PCpX qNQӡ^ U)b($M7V?*ARH_^oARİ7z2eFV^zE"Fi]8U #\I<܈b]GX-8VNtr^Lr>~fcreH. E!貘\TLq+ W-gЅHqB0q+q"g9ɤ¦*U0̓L:!Tkᶭ+Pq,3r^0\XerEACDTYTjn3ᦘivs*k +&5%v{'U cʱFs8"DuS Ic T[B(H)sF\JR= 1@ kQiTFj)Yƈ5H F4ͤJEZi$BXs@VsԢe `DmLr<fS>GqیZBM I-ElZ 0ťhl5I[&'!!8 IHN QV3ph4O! exMD~X~̦nY^=Er$́^@S;3O}`?^6|L# CW?o1No/0#A"z7Dq2-95C&!o?{o*%X?>2'Np7LHIC+.a*3t8;|i8Q ?l'$0DB`8-/GЧ1h%dSJ5Tf]x8RѺi(HOVξɼ147h ?Iq2J (:\ĐA1T1Q4|kl^/&\MGS44)TB >3@53Tt-tj!.=˜MmμTpLO`)wU"%TJ4 u25N> 4w(]͝f) W|ŔʣN *DHN(ȡT IBF!բNEKE|#OXҳ[ɒh #; (AA26K 8UeŒ1 U#@He u ,:&\R!Q) GpUI،&L-"9!ϴ`&-=[㼊)-@rQjU "|Z]ܖʛ4J EMFʛ(g۳Z.gJjd-iA)aT&g`AK WR 4%DR9 h9 HaD(8(Ĭn٠Q3anf zE^گ-Z]iUO:rkRO%f ٣@k<]Ou%ik)?$K<1 ]^w\ <ɥwܖhVd/ ^mq3|{NB Ƿ0v軱GzWa(/yR橇"ɍ\\mfmyFgI"sKd[9{w;#f ?,j2IQ -l;>[lkMx.[X~u,'cGPi/G7]\ݹCUNFL Ɏ8=6 $z:89_up:]*I&4~h,d~JY|"98[pӟYXӵoW$'D!ڼ,6E6~LV5r?9r{Qb!]}qON̔d0ZL:DS =!͔@?lP-w9P$~Q-a"%5a(np1O$~7Ǡ op!"I: swK2Es17Þ(zr$/o.x5dGgϩphH6Ț 'Drբ{?Yy:U 'RC&[1d-e`b܊c<{^ n\z=p0Քjn~J>ZC[-RKKUv(@HPFPdP?3t'ᓕJ)Z1=xh &)ҩ[N~u0%nw+}`w?ɡ31?zWd. 
oEOW{RQ*O1W/DI1Όr7Zc C;}@so/Q\Tuwn YKZ]ջ#"!j-/ti^Fs'w7,?g*oK: EkQiT>zRr9_k̤֦z~MkL$dF?d7|rwuxJSg2K}bc 1\e)ɴ ng2 1ցlV boc gW<#1eIjQ8ad…q@Gr@BʵOۤWoNzBᴣNIlՖymSmص(3kpK-^|֗#ee;F)3k=^<2dPP8p# yZާv$2Heˢ& YFim3ORS*ϥВ8`3N2S|7xj5<50kP| w5w'Ĕ-z3ICod~9Pf.*-yh"tYk{Ԡ]1aNvI1Hk<7c#B %eQ3]-ԉBtխfl VvOv$3fc (ȢC;R6 :#kz4s o&o5jePp)6%k3Ci(!PO3G|֩fsIHqRܜ1ZVPXϩd-@sg6Ưi=׃6d9M/ 6[ Q5'X~ **"Pƨ٬f&B<"1:%nAUp s30F$V:@F0H!Jg)sx75#nGQCw(;W(W&Ƿ9nn(#q9+]rW~ֶ 0_ziGr5BHoD@2pZcrV߀1#ZdG#m;^.GZDa DvFwk떥~ʄCnY<pu7%*냟 ]P'ŔdJd*ViI.>ڥ/}VfWtTŒ3.JӒ+j`mFi},Lo~V׆b'mdn ܧ H'v3Ő*-h}̐;Oh2y{X=U&<% Ü7$_Y{8Aې`梬:Ĕ@\r8Y?8CK?Ê$֙:_k^LZ_5uN\UЎQ粟w9NZ5œko Rϴzx;s+x%U-v#h]eiךŹʍ%΍x]_JpEy/ qS19BC{F԰%*tv9iwM#u#k/TLQGj_~'Zso7& s r;}A^ B2u+*aԘ?e&1Gg^Q݆qjٙCU/#Q{}a(;gG{t1?/9㎛~zM83Fp)IϥŹjKDos=ڊ ;t*Hb$[AJ(Ƽ L_i~GO'S(;;R5!fTPkdL5#4O_?^—0g o\B /??} tjg~\T'oS}n 1J\-V 5VcUX8'A41K>]q 'E2B$>rBؙSh=cEDpdʠu@argG3^h90z?^^f_IήG.9a6L?e2^&{7t4d%2 au,A8eTF#@6 J<,ze$x2A i>n:Ɉziw^jA3eٛI{#_y'.YbmZ|nhLАHD>Aآ߲rDaQP2>_G'1U c៎`P{›kfY?f<]HS6㼵1(^!4yVk'XRg"csfƔ\{z}z7w:/>|=^gsmfqLXz7%F@'Ge{JQ*цpx_x;J8T4)TaI=~fi H"`9 CgB/W7~`EdWA`fٵ@:~Zi#(-*pMN34hB؁~rB8/aJPUjĨ КQ`I>6Y$"7ŘQZe Qr \"SEj*06;t so㛣-V]һ mq@'1e1LwHktl;pcҜ>A.Lݦ^I׹RA.C#XBp;xѼn>L-,[Yi{Fŏb?AU쩤2jY1c[P/|/?oW?Ƥŋ!MP^~u&1vPNmw`//w<:#wdR-nۏp#pY;k[t.dAsꎚ{¹i36|FQE+N( ׉Bkv`*cFz{ =E5,wńʘ5_2&dCJ5ye * YPC 1r<*X; 3[:A\DhOdo,EӦoM,ehUqnosPz]TΧjpyn瘒DMy᷏Vl6߀9l3.13r܁[`'vvg]\_ʋ0D"UOCul`ۻtr<98&69RUT#.LxNTUPZoUT3X;"K%E|x |c|goqN+/"&,P 6`fgq@1J/}Șe`jF蟜v(q{oWSg3rѢZle=v0nA =q#*}Z=e ^z#^^#+=k͓f4MC*'mr.tIaV0E.k2O\ q@L)& !0 azp>>=s6A!#cܑK !NI>Yϩu3yme\g.ߐ^} j، M(IGTw'úM9:Y ƚSg7Y'K4E;pskeDm]ybQ- :}PъA7ѭ9h8ܾ*rfs{(2BNl=;;!^iXnN^`f 2i5{?hAX׬ҎIî03G!ge1nRY;,rnJ{aD::tmo2=^oWO>bGrW\CۯsWxybuz2-+{z٤ eBQS/^b(h%a/+7k?Ƽ2RMEǘtru?1$,P|Z<~W\Z{&i5&HW;m_' ^Vm\̪lGjRofXR9pi9kqu8m"*x޵Qh ip0USr>GTp 5\ K%Ex:`n k(YAcgxri:v4$VenܹՒU}6Ӱo]5w^;3PTb49702GT$^ԏ\q6Z(ѲA8" DE` RȰCA""*+b:Ĕ!dOAZ"JBTGIǔqS)lFAcc̭E<]%W2TGd} P RY+W))9#FHI M P Y XqfdQJJDt ZZ"jiC`aY  F,},S#Xd3R̳v]ӴA;R3+1gzA{p(MtA>#'4W]޳}ڜݷXQ$/N鞿[$ 
Oe8|W/VJJys*#Y6pgF7滹D5!o{q4͗,p]';z8W3"1D"SZ{f6qdyxL)ђ#x5B>dw1^lxSRH5tJ8+f>/>ޝ*XK\M/곦.'7qMw&eȒf/!_#u筼 76BJ!Qd#£+aύ 4B\ \~ @z: &HWE@~8EUPeP]G57x2ő4\ecwb3-1(g+xd2lMㇼOAb'±k.`};|~ +/qd3V+f#6!bN*V'HSbr;5׏ z!# V; k<&AN4„P+wmm#ɒ;xU[urg.HIyJ@ h][&8=55HO/:v(dEN޳VjM\G?jU j(-$ Bp3+`L:֝-S[я$U#`Uƅ^ ĄZ*,ˤd:B[!:w%aj DPZvO6m\ Δ& L>gi{Ыw;"'-+z~o ej~G_ g{=8G P Dy$ف icp/hvO|}QTvV|Gf>h q##,3IJ+KxH,0 h|g%RݗcwV XTi1E^Q_< \߬pL?\f{ێ |aY$+oqs.gM-Zs5z67"t\˥L)? ڭ{WL%X"6 9 Z8KLplHr}}@+e3$" .;oB0cZ ukaԭՄVuWI|3 q Ƣ:=-a5Jl譌Xv~H<{,kA3Qbt.a?@OQݏmHʻ {N$7cƙdcOYO>Wa6]:fJߦaoe-2M[UelO>Vq@[mqozIWj\A8LvNTӭ0Y wh9wX_ݻhG-*Lh0RSzjCr}ZWWX1,QvqZ,oE "yrirpiJ&Kd͸ ~\f!L(JO OG:M9qazPMM7[RQ l | ͥX1^ߋb 3RKR7f?cWH=IZۥ]i}a*[ʴSY&pz,t(r孽]x# ̛_~}oaY.?e}]~-iRHPBK )t Ӌ&C<(?T*bq}*T!i&~*(*)6P\]U#iAt0=_|)jrV'%*pBL&zYAafE@V @P\a&FtM8yw5p_x>߯v27IvL%Kx1ƺr*ؒo{>y)n- ͋(kmEh,ԌSg]\/~N |ϫNJ6d0"M@5eBt`GK-JhFQͅC!R vBMB`qm|a&c:8UԦURz| |qHCXZ}ZR2=^I* ?3=ƯESD Kz\p'N`U _;hsmE];B%D,\j.xpj֕m?êr/+ޗ+~Pu2^v^w4YO_ ^Lأmg~h-A&P/m727ն[xٍ8 AHpRYx/[ښ0.<E3Ԫ#>GmO,Bypc?N~6\{lWB s1/lysSc+@OE]zu-hQ`EOe 2xL"5&!8K14 b&CDK,I>jnB]ϙ & 1ȃW Ofaas{Y͛KBtb: GKs# # PN^M>6 BO\wW6Y]} dbn,srVQ‘H4D(ǴՈ+jn5(ϨASDND&q; ӁhiXô o*_?ܔ-5URT5R/cc<-MF'!F\rK-OT`eJ9é#"-*:a.Q[ÙRY:r*%KSA])Sf`$ G%iY+kK%01%8rV}m-gr`Ve((͓4aS#N0.qeXTK18(K2t YḯDTF;L YJaI,ɩ8e40LM3>ASQ&wzhII;45M*m4$s5! ),w $l,t6Cou3 ^NrT%8{q,_]U&ד~42M6oП֢ha/ɖ#6{%zvah{BOổ2~r31[]؝tc}{|2Oɮ \Dd!DlJC5w tbQǻ T]nQzwkB6)XLݍOMGO P8%vf]#a:<(17L9ڂ@ǘ!??Mmp1#pwHU:M+p5Wj?AfsȎGFsR(ˬٻlh`htql8kSWydUi^LzphyLYw.В[:v(eq*k$BdL:C/]o%2B(Ez\*lOv@v[O<ݟlW"[gu'SphSL86b3L4FT.uMYvZ t^M>j, Dʜzql]$Zo'}8)$-EԃD aG^au(j4hu4p4tpIec5VNlb'fR]|b%Sc% 2ÉnjS&jZ"TG8"*`-m0[\QC=K)}> _Vh@V\[U; ]@7Iͽ0]T9]'RJ8')b|D޿q*VOXrSc`B'j%{3#=wxČvR*6 7b8AB9ٜ0MnD Յ(!lSY* 63Ia D +E¥ӄ:DuMu<4A`6Spz2R r}3MK.rZ6kF䛫'k1|{k֍Pro?PT(No ߯o-ADL!׿r1_~e4i|(%ddiӾj4Xc fjZNj|`(?$`xzFs^n٠‘hnc#r~ unxZǕүL~e+eqe*vLpJ22zA&eFL&2pX').`Z("{A$B>;D&A=؉FL2aYfLKgX ޖ$#Hvґ:&9Ofu0G3RX12 D<#%,R2SLTg,P !fOff>ITŽ#Q D&'f!"ve 0vHpˌ%TEU}Z5ՅqYcpB;CfX U$yB(95(E%RlSb[U][ad(6 ]A1Iw%4Tw*rp`9! 
Jb1Õ&e1tCVqWjЌ$5"k '\3$@Bs\݃k3Y!u~ 嚈$ofӨ޴uxaS7d@ 8 9>Hũ)$p |"3ki-i _)8UQ^ɋ@EGS]%:F BIcc2&o:THQ@ގu3h@޹a8}yygb{)Q:qp=-:8&IR/#Dw?J]G nҊ[CFKiɲĈ健8 0']Y Tll$N"ebJ##Iƴ #C-q1\-FU*p^:hk !XgWc " x娤\;=+BBcmy%7iہ{ݻ}IRaR$;&,?fj110 X`{bivD{8JڿJDk|G.ߝoY} {30Qoޥ_ / (mHƑQ{񳽽,lg&߾#w-h7s{؆a{m/B%?%v҃Ķ/BXT 7Aa[rͶUTO'\;$BU%[0S0Qy0̟2!i!-峖Gxt;gz!-1pҙ?eHG8GGwBg ஹaU>x_%)/}Dcz}7mEgvigXl(+uO@v Gi5X$S<~!A+}mJ=H}dey˻IOjY4кS`?HR$0Hqv'%H1p_ZкOugE{Rc)hPݮlĸxx?KM]B" \lw hMcf9@E%h)!RɴNȒ`'k}UWܛblζ 4|9Oc6h ڳ߽ڝ##+VADzH_yO`Xͅ5+@ݺpfpdv`;m:^| 7t_]\.Gqw>9_{w]zN{'ُ`3~vv7.?g,D*c{"kkw/$,]te_M[Cs͂~ڛ}{e耟͟,9ֵ-̀B<(u-M 8I`:0x/-BOMF+V hoeu 1Dؿ-ZW#5^BK*W͑BO3x%FTjo\l扣O7ur,vNf-1Ft L}|g><z]G~!=z&.~{nt7kȉ:ܩ݇A/H5RxIv.ԷҍoDi/|Wf[ŇQFټCO8}`b{FB I\Ug Tu^oy19ǿ$=~8{0ēQoEnp5Ëosw>n%&f?ez8|=4aNa=8'vyO9+[~^́> Of޹6tef=ƈ'!>& nkԈ hU u~J_NeL) Ta>G7Bпe s+mah)c #:{z> !z-rB7ZA (uRb@ sHoVauTzd ݲA>RSYaI$LQc8($B\#;,06A$FL8.D,?Aa k Q!zd|2ұ'+^zoD/Bx zXb57l)Z~1T %1ly#RX@sػD AŁK։axY;ٓzn2L~@ vC[N'm{O[ߍ&vϢu[)M9~Z~r7d2^^bE⮒Z]ljzb9dvcr?^P6`>jGxUɄJݸ͜gz[ujƀ#w>IһWQĂ 1Sd^B4``((—WRL(| ZxI _ZK2vGUx J ҿ*t$ݷF$;ܓ/weoh!ۨ^zugTii^ zZeTҳҠY;TmT_/S;."!Zyy } 3vfE>,g%/D'vBZJs\A 5Ck /aIvfGH"I g h;ȮRoȳzFa|˘dO~ GцwqQ|:ܱඝ'5a  {XPpT1aee#[-^-fSSg:E(+9{.dB AWwB&8ZZQ[4) ,1R"amL#cly'Ќm(t@CDx3woxp%p861B$((1:ĭ)Mu]->vGp?H>ߴU}ш@0‘16Ԙ]RG4OεĔW Bt̄iZJ*k,=RkR!2rdB׼o]emBtQ9(߮K'L:iCm:ۙ-mH<7/$%@csA Ysa}pWE D`ڂoס\\mITOfۗۗ`s9lCA%HefE|-c_5/*';?;3z;_X'n󧡋s黗 v>G-a40Ԓ)`"9w˨?3g$9] pF$yek🹔D)CےFGqO4"ys u9ٰ=΅ǔ y9 q0`G tSU9au3b,J1}lMoS20$1mol{y2y1ȕP/?vfqOA>xȦG쟏xљ \ =3H=ʹ fE[=Φ!+Xzp|(NĔ__$Φ]`u`6+KCh=Mao3&rE W:{ |Y{}n 龏h yot9 (z:Oe߽|>x0dj~Z*>fz22n8n5D<:lZDҥ0-6Y"o]lgۀ)nv*e)YܪNds'UU?7ѢS"hdOn,݄Yt$=B9{{;it96)OM0OVsюDOv$ܥN yg4M}4ˡh1zꍾO0-2{)ݧ `◰㙄=9bYq;yIۛ4,͟.I̼zO..^^uu[u\ ZBp+rΑ,wfOt7k-aoHԼ8=uжTfU]l+NƢ.t{*h9}KxW1a36sF"Yw7&9zۇH>j*"8-/\Yн-{Z>'Ҏzsw cD1YJs.;\>e]#Dl'_oW#l0`tπ0bVa1hE2@(US$<J )zYIyx@MlHTpQEmV16]^vС!l6  RF"7 Sň$6"^:CgCrg(vS]*V$Z Z|EP"ШXgd{7iXn}A#[#hyMH^;m WIk\aJfXmR&.G miX4^ҦVCꗴ,UF **Kh%lk;7Pq;h碶nH7sHy#3J621* t9 3'@PfDlrI7sUb^\Qt-FF.]cħUwHmZb"NNTx;"gi8fo?wI ii/{ $%zWVZ®{py(}G}tXk&h(鹨h?t[甀2J 
iQU-CB3Ә^1}-NӤBJi/j}DRKKL, xrIˣZz)=k)2MJT")XIi1j,Kz)=G)$MJ)x'!/j}Ԅ~?o)e& "FNBJ_ƨu)/֥IH)O\V^sRx*D)T%é *9<$7rd̵)%Y!gBݝuoQ09kv'fu %?{n:(d$ǹC"DHe$Q]1˅)ρQxk)Vb4t7l[#(0vNsu Rd]isx)ۗJ˻^"9{#RZ̴_ܻ2vV^}kY;iE,ޫ7Qӻ*r6r9lݿͦ1}$u1kb ;q0^+"J1{W/BT]gV9!hSúY Pa_;rVjեqTv<,<6Č1#PRWj5Jxfɗn6\OK"x"|8 Xq=7tV P(sv4yR6d@YNr|"ƫv0o/.rW3U!.n>'4SEy>`yѧۛ0 ?KPkoۂ3,'DH[>ALSiLZ&rXظ^W`.߾={ṁ[o faJP&:6|)l=yoY\#`|)fHj (l Sj5"dBu[w4^7aVwo1~y{BaUŁrv"&R18MHA ChWz62qrH(5"Vi`3Lxp9r晢ʕ<43L}GD Zj8s=kT~H !6Nւ[H NR.0s"|2N8w6/0 nh():6I$ eq:F$dB>٥D^Gy%,e cM2|ig,E3`{ r3->ٺ|M:h(!0uSlZ@T8& ȘX;C +@ ?Eh,U?6zy/T6+@F1pM^W kP`֌&̓pa,k;m ALD#% P09b΂P85Mu)DbYNݟځ ?_cU>,zˇ1\_> ,E&xƅ܇ ᬜ0 ~E0)lӶWvPOܼa`ð2`fjW{A{'P&y؂)n_noC[-PRHw=1DQSd_M肽[ň`ynI疐>3ZkF:r nRJFM6_ˆ=#"W]]U]]s~rKh)\`oM1"t<-DO$BVEk&Te>#2Imd,B/9Վ(tzW)b.ieWKBg&^3 ֜zFj֋EL`q\CUn=}5ob|;zIlt[vS9B9&z}gܝLeԿG۱'Jþvn ԂH73n9>ц-H3}!w(67Mv;~Am{Cr^65uQ,'>e 4wby|](0ɝLwb\Ui,] x3N͠VC{ B%eH=#{Ȃr䣁|iMTc!Ggaռ*<yTWj,0y*1՘ܵř}f׫fΫqчlmJza~ݘP" pGڊr߫ο;*Mn-r3yP-m,cSv;.2S?adz/G Vmh䶊K>^(Ն$+Q'lv#Gq}F֟%L;~R5!!_:&8O)D{n<gTn}z(AgͽvkBBr=)j)StMb퇁|PTucU[^хU~")%Phtx,g@k9l*MeSUAKba6U}Φʯr2ʦGDm<SV1s(}*U\cih|y[vFeSյ[hcUs?MU/dZ !|6]qB3f|%KէJIXG +@usd\KPΕUk(AK"({{L5%| =<>ʦPT=L;r7,7\DdѧMAlyFt\QEhwiL n61U$flC~`qvǖ ޲,fIL*ٖ2I8۳e&,&XŸ8yruW0ݩΊluYV>0XNOŞ;Vbh}-C, S]Ytj6+D {{ !92/A2+2s;0[eֵ[cB}nikH*krFC^<"_iynyU'͟\EOʭ 6}7Ӿ<$ mmR̓4]Non ©RAL;Q.GWL vhu"^LH"PE 9B՛KmP⼖9h((9 "j'eL׏ X';_8&4P ^@&F!fcF@8dzcHÇ)VǦDH#͑8W( 40 [ 9NVj"YFPPE\Kj'⌊(qCJTF,B& %!NBz,ƴMm^A!)$!,D3Ha*Db"0(ZkQؔ1e1KX*,ih68hzynH/67\WU&"WQhc z{xkdY__IаiwY !!qiţoe a^@yLERv٠g 4GLW1PR TTi;2#XsiLy&Gf{nT6jl>7.цyf62q)8Zb'cPȨrɕ5w>Te'/d G)Y!PA8>  HHSWb2#mc,FDC/AoND@|BITg L'(8ТjSRulZ$7gjiG4Qx"B/h Az"I!} v׬BAF1@ĩ"_ oCoRԋVJKJs}p$IVX•b,y&B[YG##0e,vSgX$~~{FfaQ([w6ॠ`sZk1;Ə܃z=D[BᴦXm Uwi7SZ  &3Qsn{s<5I(9 `b0 ]+{sn̘_GJ\m|RXx-7o?8R^?$';ʄJwu6]yzH-R4!a_$WcD! 
Jan 31 07:21:31 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 07:21:31 crc restorecon[4749]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 07:21:31 crc restorecon[4749]:
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 07:21:31 crc restorecon[4749]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:31 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 
crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 
07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:32 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc 
restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:21:33 crc restorecon[4749]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 07:21:35 crc kubenswrapper[4908]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.750851 4908 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759667 4908 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759692 4908 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759698 4908 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759702 4908 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759706 4908 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759710 4908 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759714 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759718 4908 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759723 4908 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 
07:21:35.759728 4908 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759732 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759737 4908 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759741 4908 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759745 4908 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759748 4908 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759752 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759755 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759760 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759764 4908 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759768 4908 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759773 4908 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759777 4908 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759786 4908 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759790 4908 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759795 4908 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759801 4908 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759806 4908 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759812 4908 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759817 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759822 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759825 4908 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759829 4908 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759832 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759835 4908 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759839 4908 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759843 4908 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759846 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759849 4908 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759854 4908 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759858 4908 feature_gate.go:330] unrecognized feature gate: Example Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759862 4908 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759866 4908 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759869 4908 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759873 4908 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759877 4908 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759880 4908 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759884 4908 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759887 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759890 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759894 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759898 4908 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759903 4908 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 07:21:35 crc kubenswrapper[4908]: 
W0131 07:21:35.759907 4908 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759910 4908 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759913 4908 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759917 4908 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759920 4908 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759926 4908 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759931 4908 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759936 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759940 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759945 4908 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759950 4908 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759954 4908 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759958 4908 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759964 4908 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759970 4908 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759974 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.759994 4908 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.760001 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.760004 4908 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763440 4908 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763472 4908 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763482 4908 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763489 4908 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763505 4908 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763511 4908 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763518 4908 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763525 4908 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763530 4908 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763536 4908 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763542 4908 
flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763547 4908 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763553 4908 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763558 4908 flags.go:64] FLAG: --cgroup-root="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763563 4908 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763568 4908 flags.go:64] FLAG: --client-ca-file="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763573 4908 flags.go:64] FLAG: --cloud-config="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763578 4908 flags.go:64] FLAG: --cloud-provider="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763583 4908 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763595 4908 flags.go:64] FLAG: --cluster-domain="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763600 4908 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763605 4908 flags.go:64] FLAG: --config-dir="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763610 4908 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763616 4908 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763628 4908 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763633 4908 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763638 4908 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 
07:21:35.763643 4908 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763648 4908 flags.go:64] FLAG: --contention-profiling="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763654 4908 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763659 4908 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763666 4908 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763671 4908 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763679 4908 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763685 4908 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763690 4908 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763696 4908 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763701 4908 flags.go:64] FLAG: --enable-server="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763706 4908 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763713 4908 flags.go:64] FLAG: --event-burst="100" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763719 4908 flags.go:64] FLAG: --event-qps="50" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763724 4908 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763730 4908 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763735 4908 flags.go:64] FLAG: --eviction-hard="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 
07:21:35.763742 4908 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763747 4908 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763752 4908 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763758 4908 flags.go:64] FLAG: --eviction-soft="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763763 4908 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763769 4908 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763774 4908 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763779 4908 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763784 4908 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763789 4908 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763795 4908 flags.go:64] FLAG: --feature-gates="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763801 4908 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763806 4908 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763812 4908 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763818 4908 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763824 4908 flags.go:64] FLAG: --healthz-port="10248" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763829 4908 flags.go:64] FLAG: --help="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 
07:21:35.763837 4908 flags.go:64] FLAG: --hostname-override="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763842 4908 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763848 4908 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763853 4908 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763858 4908 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763863 4908 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763868 4908 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763872 4908 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763877 4908 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763882 4908 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763887 4908 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763892 4908 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763896 4908 flags.go:64] FLAG: --kube-reserved="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763901 4908 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763906 4908 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763912 4908 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763916 4908 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 07:21:35 crc 
kubenswrapper[4908]: I0131 07:21:35.763921 4908 flags.go:64] FLAG: --lock-file="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763926 4908 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763931 4908 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763936 4908 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763944 4908 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763949 4908 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763954 4908 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763959 4908 flags.go:64] FLAG: --logging-format="text" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763964 4908 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763969 4908 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.763974 4908 flags.go:64] FLAG: --manifest-url="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764002 4908 flags.go:64] FLAG: --manifest-url-header="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764010 4908 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764017 4908 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764023 4908 flags.go:64] FLAG: --max-pods="110" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764028 4908 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764033 4908 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 07:21:35 crc 
kubenswrapper[4908]: I0131 07:21:35.764038 4908 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764043 4908 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764049 4908 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764054 4908 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764059 4908 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764073 4908 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764079 4908 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764084 4908 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764089 4908 flags.go:64] FLAG: --pod-cidr="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764093 4908 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764102 4908 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764107 4908 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764113 4908 flags.go:64] FLAG: --pods-per-core="0" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764118 4908 flags.go:64] FLAG: --port="10250" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764123 4908 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764128 4908 flags.go:64] FLAG: 
--provider-id=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764134 4908 flags.go:64] FLAG: --qos-reserved=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764147 4908 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764152 4908 flags.go:64] FLAG: --register-node="true"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764157 4908 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764162 4908 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764172 4908 flags.go:64] FLAG: --registry-burst="10"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764177 4908 flags.go:64] FLAG: --registry-qps="5"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764182 4908 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764187 4908 flags.go:64] FLAG: --reserved-memory=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764195 4908 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764200 4908 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764206 4908 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764211 4908 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764216 4908 flags.go:64] FLAG: --runonce="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764221 4908 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764227 4908 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764232 4908 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764237 4908 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764242 4908 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764248 4908 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764254 4908 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764259 4908 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764264 4908 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764269 4908 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764274 4908 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764280 4908 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764285 4908 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764290 4908 flags.go:64] FLAG: --system-cgroups=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764296 4908 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764305 4908 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764310 4908 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764315 4908 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764328 4908 flags.go:64] FLAG: --tls-min-version=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764333 4908 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764339 4908 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764344 4908 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764349 4908 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764355 4908 flags.go:64] FLAG: --v="2"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764371 4908 flags.go:64] FLAG: --version="false"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764383 4908 flags.go:64] FLAG: --vmodule=""
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764390 4908 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764397 4908 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764513 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764521 4908 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764526 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764531 4908 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764535 4908 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764540 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764546 4908 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764551 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764556 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764561 4908 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764566 4908 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764570 4908 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764574 4908 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764579 4908 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764583 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764587 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764591 4908 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764595 4908 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764600 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764605 4908 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764610 4908 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764615 4908 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764623 4908 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764627 4908 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764632 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764637 4908 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764641 4908 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764646 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764650 4908 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764654 4908 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764660 4908 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764667 4908 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764673 4908 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764678 4908 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764682 4908 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764686 4908 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764691 4908 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764695 4908 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764700 4908 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764704 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764708 4908 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764714 4908 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764718 4908 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764722 4908 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764727 4908 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764731 4908 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764736 4908 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764740 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764745 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764749 4908 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764753 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764758 4908 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764762 4908 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764766 4908 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764773 4908 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764777 4908 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764781 4908 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764786 4908 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764790 4908 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764795 4908 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764799 4908 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764804 4908 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764808 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764812 4908 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764816 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764821 4908 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764825 4908 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764831 4908 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764835 4908 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764840 4908 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.764846 4908 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.764854 4908 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.804917 4908 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.805006 4908 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805152 4908 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805169 4908 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805181 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805191 4908 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805204 4908 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805214 4908 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805224 4908 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805235 4908 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805245 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805256 4908 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805266 4908 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805276 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805286 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805296 4908 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805306 4908 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805320 4908 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805335 4908 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805348 4908 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805358 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805368 4908 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805378 4908 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805388 4908 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805399 4908 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805408 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805423 4908 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805435 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805445 4908 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805455 4908 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805465 4908 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805475 4908 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805485 4908 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805497 4908 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805507 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805518 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805530 4908 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805540 4908 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805550 4908 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805560 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805570 4908 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805579 4908 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805589 4908 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805599 4908 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805609 4908 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805620 4908 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805633 4908 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805646 4908 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805658 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805669 4908 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805679 4908 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805690 4908 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805699 4908 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805709 4908 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805720 4908 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805733 4908 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805745 4908 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805756 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805768 4908 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805780 4908 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805790 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805801 4908 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805813 4908 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805826 4908 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805837 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805848 4908 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805858 4908 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805868 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805878 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805888 4908 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805897 4908 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805907 4908 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.805938 4908 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.805955 4908 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806258 4908 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806278 4908 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806289 4908 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806301 4908 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806311 4908 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806322 4908 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806335 4908 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806350 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806361 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806374 4908 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806384 4908 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806396 4908 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806406 4908 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806418 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806431 4908 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806444 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806455 4908 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806465 4908 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806475 4908 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806485 4908 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806496 4908 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806506 4908 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806516 4908 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806526 4908 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806536 4908 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806546 4908 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806581 4908 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806592 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806603 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806613 4908 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806647 4908 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806660 4908 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806672 4908 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806683 4908 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806696 4908 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806709 4908 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806719 4908 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806728 4908 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806739 4908 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806749 4908 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806761 4908 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806772 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806782 4908 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806791 4908 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806801 4908 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806814 4908 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806827 4908 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806837 4908 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806847 4908 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806857 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806866 4908 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806876 4908 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806886 4908 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806899 4908 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806909 4908 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806920 4908 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806930 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806940 4908 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806949 4908 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806959 4908 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.806968 4908 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807005 4908 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807015 4908 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807025 4908 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807035 4908 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807045 4908 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807055 4908 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807065 4908 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807075 4908 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807084 4908 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 07:21:35 crc kubenswrapper[4908]: W0131 07:21:35.807093 4908 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.807106 4908 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.807382 4908 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.821158 4908 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.821301 4908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.823370 4908 server.go:997] "Starting client certificate rotation"
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.823414 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.824377 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-07 15:32:52.852827103 +0000 UTC
Jan 31 07:21:35 crc kubenswrapper[4908]: I0131 07:21:35.824466 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 07:21:36 crc kubenswrapper[4908]: I0131 07:21:36.215456 4908 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 07:21:36 crc kubenswrapper[4908]: E0131 07:21:36.234580 4908 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 31 07:21:36 crc kubenswrapper[4908]: I0131 07:21:36.241906 4908 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 07:21:36 crc kubenswrapper[4908]: I0131 07:21:36.282116 4908 log.go:25] "Validated CRI v1 runtime API"
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.153321 4908 log.go:25] "Validated CRI v1 image API"
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.160110 4908 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.170955 4908 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-07-16-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.171013 4908 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.191483 4908 manager.go:217] Machine: {Timestamp:2026-01-31 07:21:37.183506993 +0000 UTC m=+3.799451667 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0}
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3a1d33fb-cc50-40c4-b06d-abd3cdc211c1 BootID:efb1f9ea-64bc-4ee6-b73e-d54792ad39f9 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fb:5d:e0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fb:5d:e0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7e:4c:14 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6e:54:7c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ed:3a:fc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:55:4c:70 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:84:65:dd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:6d:61:03:c5:e2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:26:81:8c:e4:9f:06 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.191724 4908 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.191914 4908 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.192520 4908 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.192711 4908 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.192753 4908 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.192948 4908 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.192958 4908 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.195303 4908 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.195328 4908 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.195910 4908 state_mem.go:36] "Initialized new in-memory state store" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.196790 4908 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.211924 4908 kubelet.go:418] "Attempting to sync node with API server" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.211989 4908 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.212013 4908 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.212026 4908 kubelet.go:324] "Adding apiserver pod source" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.212039 4908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 07:21:37 crc kubenswrapper[4908]: W0131 07:21:37.214831 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.214962 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:37 crc kubenswrapper[4908]: W0131 07:21:37.223155 4908 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.223257 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.233230 4908 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.235040 4908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.321297 4908 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323342 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323379 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323391 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323401 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323418 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323439 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323449 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323467 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323480 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323489 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323511 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323522 4908 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.323548 4908 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.324248 4908 server.go:1280] "Started kubelet" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.324474 4908 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.324954 4908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.325227 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.325657 4908 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 07:21:37 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.330495 4908 server.go:460] "Adding debug handlers to kubelet server" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.337952 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.338089 4908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.338138 4908 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.338153 4908 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.338163 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.338233 4908 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 07:21:37 crc 
kubenswrapper[4908]: I0131 07:21:37.338180 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:25:18.524762929 +0000 UTC Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.338971 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342260 4908 factory.go:55] Registering systemd factory Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342298 4908 factory.go:221] Registration of the systemd container factory successfully Jan 31 07:21:37 crc kubenswrapper[4908]: W0131 07:21:37.342359 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.342465 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342701 4908 factory.go:153] Registering CRI-O factory Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342724 4908 factory.go:221] Registration of the crio container factory successfully Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342783 4908 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342809 4908 factory.go:103] Registering Raw factory Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.342826 4908 manager.go:1196] Started watching for new ooms in manager Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.343363 4908 manager.go:319] Starting recovery of all containers Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.342919 4908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fbfd458d4e188 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:21:37.324212616 +0000 UTC m=+3.940157280,LastTimestamp:2026-01-31 07:21:37.324212616 +0000 UTC m=+3.940157280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.363355 4908 manager.go:324] Recovery completed Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.376548 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.378723 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.378770 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.378782 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.379490 4908 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.379512 4908 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.379539 4908 state_mem.go:36] "Initialized new in-memory state store" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.438827 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.539205 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.539885 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.640171 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.741071 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.815594 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816023 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816142 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816243 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816324 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816385 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816448 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816505 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816571 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816628 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816687 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816745 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816812 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.816903 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817000 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817097 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817176 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817242 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817304 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817373 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" 
seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817436 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817500 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817561 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817656 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817725 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817790 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 
07:21:37.817892 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.817959 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818048 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818126 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818188 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818249 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818304 4908 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818367 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818430 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818493 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818574 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818638 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818700 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818765 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818824 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818882 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.818945 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819020 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819097 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819166 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819234 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819294 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819361 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819422 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819481 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819539 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819604 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819667 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819731 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819791 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819855 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" 
seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.819915 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820001 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820065 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820144 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820222 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820293 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: 
I0131 07:21:37.820365 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.820425 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821346 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821377 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821393 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821409 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821424 4908 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821439 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821454 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821466 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821480 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821495 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821509 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821524 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821538 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821552 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821566 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821581 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821595 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821630 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821644 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821659 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821672 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821686 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821699 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821714 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821727 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821740 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821754 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821767 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821782 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821797 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821811 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821825 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821849 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821864 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821878 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821892 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821904 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821917 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821931 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.821968 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822005 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: 
I0131 07:21:37.822021 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822037 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822054 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822069 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822084 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822100 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822115 4908 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822130 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822142 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822154 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822167 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822178 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822191 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822202 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822216 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822228 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822242 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822254 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822267 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822280 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822293 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822306 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822319 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822333 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822347 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822360 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822371 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822383 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822410 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822422 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822435 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 
07:21:37.822447 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822461 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822474 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822487 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822498 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822512 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822525 4908 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822538 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822552 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822565 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822579 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822591 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822604 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822616 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822628 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822640 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822652 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822672 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822684 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822696 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822711 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822724 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822737 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822750 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822763 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822775 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822786 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822798 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822809 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822824 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822838 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822850 4908 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.822864 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828112 4908 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828195 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828221 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828243 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 
07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828261 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828279 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828300 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828327 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828346 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828366 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828389 4908 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828407 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828423 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828437 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828453 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828469 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828485 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828502 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828518 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828534 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828554 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828570 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828589 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828607 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828620 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828634 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828650 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828665 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828681 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828696 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828712 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828746 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828764 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828779 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828795 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828813 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828829 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828844 4908 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828872 4908 reconstruct.go:97] "Volume reconstruction finished" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.828882 4908 reconciler.go:26] "Reconciler: start to sync state" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.841878 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.937025 4908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.938839 4908 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.938880 4908 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 07:21:37 crc kubenswrapper[4908]: I0131 07:21:37.938909 4908 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.938962 4908 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.940320 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Jan 31 07:21:37 crc kubenswrapper[4908]: W0131 07:21:37.940150 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.940686 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:37 crc kubenswrapper[4908]: E0131 07:21:37.942231 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.039546 4908 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.042715 4908 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.142866 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.240327 4908 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.243599 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.343992 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.353774 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:05:43.371736733 +0000 UTC Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.354492 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:38 crc kubenswrapper[4908]: W0131 07:21:38.354496 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.354663 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.398131 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.399212 4908 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:38 crc kubenswrapper[4908]: W0131 07:21:38.438621 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.438709 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.444419 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.458077 4908 policy_none.go:49] "None policy: Start" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.459311 4908 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.459358 4908 state_mem.go:35] "Initializing new in-memory state store" Jan 31 07:21:38 crc 
kubenswrapper[4908]: E0131 07:21:38.545241 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.640838 4908 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 07:21:38 crc kubenswrapper[4908]: W0131 07:21:38.641829 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.641967 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.646374 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.741161 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.747433 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.848035 4908 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933094 4908 manager.go:334] "Starting Device 
Plugin manager" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933240 4908 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933258 4908 server.go:79] "Starting device plugin registration server" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933689 4908 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933757 4908 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.933968 4908 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.934063 4908 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 07:21:38 crc kubenswrapper[4908]: I0131 07:21:38.934071 4908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 07:21:38 crc kubenswrapper[4908]: E0131 07:21:38.939429 4908 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.034424 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.035861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.035925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.035956 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 
07:21:39.036032 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: E0131 07:21:39.036715 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.237539 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.238947 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.239016 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.239030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.239071 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: E0131 07:21:39.239647 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.327228 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.354369 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:48:48.318593933 +0000 UTC Jan 31 
07:21:39 crc kubenswrapper[4908]: W0131 07:21:39.385558 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:39 crc kubenswrapper[4908]: E0131 07:21:39.385643 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.441699 4908 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.441865 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.443286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.443362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.443388 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.443609 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 
07:21:39.443791 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.443870 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.444902 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.445000 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.445024 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550389 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550479 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550507 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550525 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550541 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550558 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550576 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.550592 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.640721 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 
07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.642527 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.642599 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.642624 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.642669 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: E0131 07:21:39.643249 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651363 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651402 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651422 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651457 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651474 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651490 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651519 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651537 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651660 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651709 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651729 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651766 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651794 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651681 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651798 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.651871 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.769923 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.914741 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.914794 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.914804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.915017 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.915148 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.915200 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918008 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918047 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918077 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918096 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918795 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.918967 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919021 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919702 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919742 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919831 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919843 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919880 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919834 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.919903 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920619 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920645 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920655 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920870 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.920881 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.921099 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.921125 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.921670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.921687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.921697 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.932012 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955508 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955562 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955593 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955620 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955647 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955668 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:39 crc kubenswrapper[4908]: I0131 07:21:39.955690 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057072 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057147 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057178 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057210 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057243 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057272 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057281 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057334 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057297 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057362 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057407 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057401 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057434 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.057383 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.260075 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.288042 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.294860 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.327360 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:40 crc kubenswrapper[4908]: E0131 07:21:40.342794 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.355019 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:37:21.291236522 +0000 UTC Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.357070 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-62dac2ae843d4a9b1b1f934e64ee14a2999f044d254a0fc00f1e40e4f866ff3e WatchSource:0}: Error finding container 62dac2ae843d4a9b1b1f934e64ee14a2999f044d254a0fc00f1e40e4f866ff3e: Status 404 returned error can't find the container with id 62dac2ae843d4a9b1b1f934e64ee14a2999f044d254a0fc00f1e40e4f866ff3e Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.367367 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7e5b3bce77030b7d8161fdaa6e54031fcb94ea7a09aa4d415097e4c841a7169a WatchSource:0}: Error finding container 7e5b3bce77030b7d8161fdaa6e54031fcb94ea7a09aa4d415097e4c841a7169a: Status 404 returned error can't find the container with id 
7e5b3bce77030b7d8161fdaa6e54031fcb94ea7a09aa4d415097e4c841a7169a Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.368724 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d42ef15b89b3413e8aec64dac08d732debf1c0549dff8fe8247ce2172acf615b WatchSource:0}: Error finding container d42ef15b89b3413e8aec64dac08d732debf1c0549dff8fe8247ce2172acf615b: Status 404 returned error can't find the container with id d42ef15b89b3413e8aec64dac08d732debf1c0549dff8fe8247ce2172acf615b Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.373307 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-db2c4dba732a19f239f4b471c71619565db4b43558773b6a42fc7de29754b8bd WatchSource:0}: Error finding container db2c4dba732a19f239f4b471c71619565db4b43558773b6a42fc7de29754b8bd: Status 404 returned error can't find the container with id db2c4dba732a19f239f4b471c71619565db4b43558773b6a42fc7de29754b8bd Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.444315 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.445887 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.445942 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.445954 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.445999 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:40 crc kubenswrapper[4908]: E0131 
07:21:40.446776 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.533069 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:40 crc kubenswrapper[4908]: E0131 07:21:40.533157 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:40 crc kubenswrapper[4908]: W0131 07:21:40.628086 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:40 crc kubenswrapper[4908]: E0131 07:21:40.628191 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.946603 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"db2c4dba732a19f239f4b471c71619565db4b43558773b6a42fc7de29754b8bd"} Jan 31 
07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.948006 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d42ef15b89b3413e8aec64dac08d732debf1c0549dff8fe8247ce2172acf615b"} Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.949078 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e5b3bce77030b7d8161fdaa6e54031fcb94ea7a09aa4d415097e4c841a7169a"} Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.950186 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62dac2ae843d4a9b1b1f934e64ee14a2999f044d254a0fc00f1e40e4f866ff3e"} Jan 31 07:21:40 crc kubenswrapper[4908]: I0131 07:21:40.951168 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c1614805dc0cf07c82ad20e1b72de4eadf8a497f14dbc4d1b6da09a324061c83"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.326123 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:41 crc kubenswrapper[4908]: W0131 07:21:41.351129 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:41 crc kubenswrapper[4908]: E0131 07:21:41.351239 4908 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.355562 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:08:46.151465315 +0000 UTC Jan 31 07:21:41 crc kubenswrapper[4908]: E0131 07:21:41.603496 4908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fbfd458d4e188 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:21:37.324212616 +0000 UTC m=+3.940157280,LastTimestamp:2026-01-31 07:21:37.324212616 +0000 UTC m=+3.940157280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:21:41 crc kubenswrapper[4908]: W0131 07:21:41.846120 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:41 crc kubenswrapper[4908]: E0131 07:21:41.846197 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.956750 4908 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88" exitCode=0 Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.956900 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.957016 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.958408 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.958839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.958887 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.958907 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.960292 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d" exitCode=0 Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.960393 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.960445 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.961520 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.961563 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.961581 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.962877 4908 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a" exitCode=0 Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.963008 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.963098 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.963099 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:41 crc kubenswrapper[4908]: 
I0131 07:21:41.964895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.964923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.964932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965191 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965293 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965376 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965717 4908 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56" exitCode=0 Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965757 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56"} Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.965773 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.966342 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.966359 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:41 crc kubenswrapper[4908]: I0131 07:21:41.966368 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.047862 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.050880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.050927 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.050944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.050998 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:42 crc kubenswrapper[4908]: E0131 07:21:42.051668 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.326932 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.356065 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:17:50.367744572 +0000 UTC Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.716221 4908 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:21:42 crc kubenswrapper[4908]: E0131 07:21:42.718154 4908 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.974604 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533"} Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.977233 4908 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a" exitCode=0 Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.977287 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a"} Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.977439 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.978422 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.978468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.978486 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.979111 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1"} Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.979151 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.980055 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.980077 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.980087 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.981552 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2"} Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.983941 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0"} Jan 31 07:21:42 crc kubenswrapper[4908]: I0131 07:21:42.983962 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8"} Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.326729 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.356273 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:16:38.429253912 +0000 UTC Jan 31 07:21:43 crc kubenswrapper[4908]: E0131 07:21:43.544687 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="6.4s" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.988395 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5"} Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.990473 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e"} Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.990607 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.991698 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.991817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.991896 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.992875 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360"} Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.994570 4908 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40" exitCode=0 Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.994644 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.994652 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40"} Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.994699 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995397 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995414 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995588 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995615 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:43 crc kubenswrapper[4908]: I0131 07:21:43.995626 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:44 crc kubenswrapper[4908]: I0131 07:21:44.326683 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:44 crc kubenswrapper[4908]: I0131 07:21:44.356442 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:06:45.750935601 +0000 UTC Jan 31 07:21:44 crc kubenswrapper[4908]: I0131 07:21:44.998703 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35"} Jan 31 07:21:44 crc kubenswrapper[4908]: I0131 07:21:44.998743 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01"} Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.001180 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6"} Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.001205 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38"} Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.003762 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.003779 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.004088 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e"} Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.004671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.004706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.004715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.005028 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.006078 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:45 crc 
kubenswrapper[4908]: I0131 07:21:45.006107 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.252102 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.253406 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.253472 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.253486 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.253525 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:45 crc kubenswrapper[4908]: E0131 07:21:45.254295 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.326216 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.357001 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:35:41.769580162 +0000 UTC Jan 31 07:21:45 crc kubenswrapper[4908]: I0131 07:21:45.503390 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:45 
crc kubenswrapper[4908]: W0131 07:21:45.582051 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:45 crc kubenswrapper[4908]: E0131 07:21:45.582277 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:45 crc kubenswrapper[4908]: W0131 07:21:45.795748 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:45 crc kubenswrapper[4908]: E0131 07:21:45.795831 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.006219 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.006241 4908 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.006327 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.007646 4908 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.007776 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.007697 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.008025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.007892 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.008059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.326849 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.357971 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:23:15.554397959 +0000 UTC Jan 31 07:21:46 crc kubenswrapper[4908]: W0131 07:21:46.389894 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:46 crc kubenswrapper[4908]: E0131 07:21:46.390020 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.558794 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:46 crc kubenswrapper[4908]: I0131 07:21:46.922402 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.010742 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7bea5de9bdcf92ed1ca51c0bf689c70a427c07170635980ad7586aa6fef3700b"} Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.010930 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.011769 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.011792 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.011802 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.014625 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.015099 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5"} Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.015156 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114"} Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.015482 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.015533 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.015544 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.326900 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.358127 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:31:33.995649918 +0000 UTC Jan 31 07:21:47 crc kubenswrapper[4908]: W0131 07:21:47.659320 4908 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:47 crc kubenswrapper[4908]: E0131 07:21:47.659407 4908 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.910477 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.910632 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.912970 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.913043 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.913058 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:47 crc kubenswrapper[4908]: I0131 07:21:47.960677 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.020605 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.022021 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7bea5de9bdcf92ed1ca51c0bf689c70a427c07170635980ad7586aa6fef3700b" exitCode=255 Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.022078 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7bea5de9bdcf92ed1ca51c0bf689c70a427c07170635980ad7586aa6fef3700b"} Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.022180 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.023182 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.023217 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.023227 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.023673 4908 scope.go:117] "RemoveContainer" containerID="7bea5de9bdcf92ed1ca51c0bf689c70a427c07170635980ad7586aa6fef3700b" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.025883 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.025933 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.025869 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074"} Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026781 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026805 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.026816 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.054817 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.193867 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.326170 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.358800 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:12:02.936367828 +0000 UTC Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.411893 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.412393 4908 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 31 07:21:48 crc kubenswrapper[4908]: I0131 07:21:48.412449 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 31 07:21:48 crc kubenswrapper[4908]: E0131 07:21:48.939630 4908 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.030571 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.033041 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.033757 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.033861 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274"} Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.033951 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034013 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 
07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034836 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034845 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034959 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034972 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.035190 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.035257 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.034940 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.035721 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.326884 4908 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.359716 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:45:50.968359348 +0000 UTC Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.558797 4908 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:21:49 crc kubenswrapper[4908]: I0131 07:21:49.558873 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.037838 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.038327 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.040022 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" exitCode=255 Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.040101 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274"} Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 
07:21:50.040166 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.040173 4908 scope.go:117] "RemoveContainer" containerID="7bea5de9bdcf92ed1ca51c0bf689c70a427c07170635980ad7586aa6fef3700b" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.040166 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041140 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041157 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041447 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041502 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041525 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.041788 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" Jan 31 07:21:50 crc kubenswrapper[4908]: E0131 07:21:50.042066 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.360398 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:32:42.20048962 +0000 UTC Jan 31 07:21:50 crc kubenswrapper[4908]: I0131 07:21:50.740881 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.043350 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.044830 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.045628 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.045657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.045669 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.046170 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" Jan 31 07:21:51 crc kubenswrapper[4908]: E0131 07:21:51.046349 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.307692 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.361437 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:45:39.846931064 +0000 UTC Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.594577 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.594824 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.596025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.596059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.596070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.654825 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.656090 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.656123 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:51 crc 
kubenswrapper[4908]: I0131 07:21:51.656135 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.656159 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:21:51 crc kubenswrapper[4908]: I0131 07:21:51.721065 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.046798 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.047733 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.047771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.047801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.048404 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" Jan 31 07:21:52 crc kubenswrapper[4908]: E0131 07:21:52.048575 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 07:21:52 crc kubenswrapper[4908]: I0131 07:21:52.362065 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-08 01:19:06.188386205 +0000 UTC Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.049280 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.050195 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.050252 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.050264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.050801 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" Jan 31 07:21:53 crc kubenswrapper[4908]: E0131 07:21:53.051005 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 07:21:53 crc kubenswrapper[4908]: I0131 07:21:53.362473 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:56:18.667165746 +0000 UTC Jan 31 07:21:54 crc kubenswrapper[4908]: I0131 07:21:54.363202 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:29:42.287437756 +0000 UTC Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.363411 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:10:28.040843885 +0000 UTC Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.511565 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.511812 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.513353 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.513774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:55 crc kubenswrapper[4908]: I0131 07:21:55.514074 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.363613 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:57:14.430751102 +0000 UTC Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.569499 4908 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.569970 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.632368 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.632916 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.634330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.634377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.634392 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:56 crc kubenswrapper[4908]: I0131 07:21:56.694165 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.058690 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.059883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.059918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.059929 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.072335 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 07:21:57 crc kubenswrapper[4908]: I0131 07:21:57.363944 4908 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:59:08.937053895 +0000 UTC
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.060696 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.062109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.062178 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.062196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.365150 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:58:29.175140776 +0000 UTC
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.417975 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.418184 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.419255 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.419291 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.419331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.420037 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274"
Jan 31 07:21:58 crc kubenswrapper[4908]: E0131 07:21:58.420262 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 31 07:21:58 crc kubenswrapper[4908]: I0131 07:21:58.422872 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:21:58 crc kubenswrapper[4908]: E0131 07:21:58.939815 4908 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.064065 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.065354 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.065418 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.065442 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.066333 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274"
Jan 31 07:21:59 crc kubenswrapper[4908]: E0131 07:21:59.066627 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.365935 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:45:29.435638502 +0000 UTC
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.559537 4908 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 07:21:59 crc kubenswrapper[4908]: I0131 07:21:59.560056 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:22:00 crc kubenswrapper[4908]: I0131 07:22:00.367355 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:01:35.54890328 +0000 UTC
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.368504 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:59:22.474345263 +0000 UTC
Jan 31 07:22:01 crc kubenswrapper[4908]: E0131 07:22:01.564153 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s"
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.566113 4908 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.567935 4908 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.568592 4908 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 31 07:22:01 crc kubenswrapper[4908]: E0131 07:22:01.569441 4908 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.570864 4908 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 07:22:01 crc kubenswrapper[4908]: I0131 07:22:01.892747 4908 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.367195 4908 apiserver.go:52] "Watching apiserver"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.371366 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:08:02.434650852 +0000 UTC
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.381499 4908 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.381777 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382234 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382367 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382369 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.382426 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382556 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.382694 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382574 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.382856 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.382617 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.385607 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.385890 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.386142 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.386283 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.386639 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.386780 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.386995 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.387144 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.387281 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.439235 4908 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.467927 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.473806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.473874 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.473909 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.473955 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474026 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474059 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474091 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474120 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474154 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474185 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474205 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474216 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474247 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474281 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474312 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474373 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474407 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474410 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474439 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474470 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474504 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474532 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474567 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474600 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474629 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474662 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474695 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474727 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474758 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474789 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474819 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474850 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474880 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474911 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474952 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.474953 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475037 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475072 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475103 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475139 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475177 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475229 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475279 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475309 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475322 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475339 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475371 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475404 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475433 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475465 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475497 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475530 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475560 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475593 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475622 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475655 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475689 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475725 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475756 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475793 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475825 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475855 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475888 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475922 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475969 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476043 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476076 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476106 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476135 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476173 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476231 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476273 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476321 4908
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476361 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476403 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476504 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476540 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476573 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476658 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476704 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476767 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476821 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476869 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476926 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477077 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477124 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477157 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477192 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477229 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477262 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477297 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477330 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477374 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477416 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477452 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477484 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477521 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477562 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477611 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477684 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477721 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477753 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477789 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477820 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477859 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478113 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478191 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478232 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478265 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478298 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478329 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 07:22:02 
crc kubenswrapper[4908]: I0131 07:22:02.478364 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478400 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478437 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478483 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478519 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478551 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478592 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478625 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478658 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478693 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478726 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478759 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478793 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478825 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478860 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478894 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478927 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478962 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479217 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479262 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479310 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479358 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479467 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479507 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479542 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479577 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479706 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479749 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479784 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479817 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479849 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479884 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479920 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479960 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480025 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480060 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480099 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480133 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480167 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480202 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480238 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480275 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480311 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480351 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480398 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480443 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480492 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480536 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480574 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480630 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480674 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480710 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480748 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480788 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480823 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480866 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480900 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480934 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480970 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481031 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481072 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:22:02 crc 
kubenswrapper[4908]: I0131 07:22:02.481107 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481142 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481178 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481212 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481251 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481289 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481324 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481358 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481394 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481428 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481465 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:22:02 crc kubenswrapper[4908]: 
I0131 07:22:02.481502 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481537 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481574 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481611 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481645 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481728 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481775 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481814 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481852 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481891 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481935 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482014 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482069 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482116 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482153 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482189 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482258 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482303 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482341 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482378 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482417 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: 
I0131 07:22:02.482454 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482491 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482890 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482932 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482963 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483007 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483030 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483052 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483075 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483124 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc 
kubenswrapper[4908]: I0131 07:22:02.483136 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483232 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483245 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.484859 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475341 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475490 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475813 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486885 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.475931 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476035 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476357 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476359 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476362 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476771 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.476899 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477102 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477170 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477293 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.477318 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478552 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478638 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478727 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.478965 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479638 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479763 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479901 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479903 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479965 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.479964 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480136 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480186 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480238 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480155 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480452 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480516 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480609 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480634 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.480626 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481326 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481436 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481588 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481756 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.481756 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482198 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482065 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482451 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482874 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.482898 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483012 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483474 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483374 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.483517 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.491597 4908 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.492305 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.493294 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.496334 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:02.996311583 +0000 UTC m=+29.612256237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489873 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.490263 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.484549 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.502240 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:03.00221189 +0000 UTC m=+29.618156544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485171 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485192 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485509 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485562 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485707 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.485826 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486036 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486582 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486686 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486737 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.486904 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487041 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487153 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487290 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487380 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487400 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487444 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487638 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487704 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487895 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.487953 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488054 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488146 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488237 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488318 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488384 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488395 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.488765 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489099 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.489188 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.502827 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:03.002817906 +0000 UTC m=+29.618762560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489247 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489378 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489384 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489465 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489607 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489613 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489616 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.489784 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.504096 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.504122 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.504137 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.504185 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:03.004171552 +0000 UTC m=+29.620116206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504555 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504768 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504755 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504827 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504910 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.504994 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.505054 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.505432 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.505910 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.505998 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.506033 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.506851 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.507892 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.509417 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.509482 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.509591 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.509597 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.509608 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.509627 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:02 crc kubenswrapper[4908]: E0131 07:22:02.509700 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:03.009684689 +0000 UTC m=+29.625629533 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.510269 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.510379 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.510595 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.511042 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.511798 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.512958 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.513071 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.513468 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.513549 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.513831 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.514668 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.514742 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.514956 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515058 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515448 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515482 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515585 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515656 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515688 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.515800 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.517935 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518056 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518088 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518173 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518213 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518283 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518384 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.518719 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519117 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519274 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519547 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519566 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519575 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519933 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.520542 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.520711 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.520769 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.520972 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.519617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521086 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521106 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521088 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521133 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521144 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521250 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521300 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.520852 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521535 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521641 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.521843 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.522345 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.522846 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.523048 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.523211 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.523337 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.523832 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524129 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524444 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524622 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524628 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524799 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524900 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524969 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.524975 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525044 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525157 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525183 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525275 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525341 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525733 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525762 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525178 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525461 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525790 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525642 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525744 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525870 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525883 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.525941 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.526059 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.526124 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.526293 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.526347 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.526733 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.527586 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.527777 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.529846 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.530632 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.535100 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.539850 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.546388 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.554953 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.555562 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.561360 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.567354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585508 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585575 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585744 4908 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585758 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585776 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585789 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 
07:22:02.585802 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585813 4908 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585839 4908 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585850 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585860 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585869 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585882 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585892 4908 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585888 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585959 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.585902 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586003 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586018 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586030 4908 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586048 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586057 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586069 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586078 4908 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586091 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586103 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586112 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586123 4908 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586136 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586145 4908 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586155 4908 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586167 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586175 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586184 4908 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc 
kubenswrapper[4908]: I0131 07:22:02.586192 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586204 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586213 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586221 4908 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586230 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586242 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586251 4908 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586260 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586270 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586282 4908 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586290 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586299 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586311 4908 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586321 4908 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586329 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586338 4908 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586351 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586360 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586368 4908 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586377 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586388 4908 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586397 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586405 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586416 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586427 4908 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586436 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586445 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586459 4908 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586468 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath 
\"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586477 4908 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586485 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586499 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586507 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586517 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586527 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586539 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 
07:22:02.586549 4908 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586557 4908 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586571 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586580 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586588 4908 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586598 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586611 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586620 4908 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586629 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586637 4908 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586648 4908 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586658 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586666 4908 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586677 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586687 4908 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath 
\"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586695 4908 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586703 4908 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586713 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586722 4908 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586730 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586740 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586752 4908 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586762 4908 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586772 4908 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586801 4908 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586812 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586822 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586831 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586842 4908 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586851 4908 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586894 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586920 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586937 4908 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586958 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586971 4908 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.586996 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587006 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") 
on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587020 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587030 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587039 4908 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587048 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587060 4908 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587069 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587079 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath 
\"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587088 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587100 4908 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587120 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587129 4908 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587142 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587151 4908 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587160 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587169 4908 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587180 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587190 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587198 4908 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587208 4908 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587220 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587228 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587315 4908 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 
07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587366 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587379 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587418 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587427 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587921 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587943 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587954 4908 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587965 4908 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.587991 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588002 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588010 4908 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588019 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588028 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588036 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588045 4908 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588052 4908 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588062 4908 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588070 4908 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588078 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588085 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588094 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588102 4908 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588112 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588121 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588129 4908 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588138 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588147 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588155 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588164 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588172 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588181 4908 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588190 4908 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588198 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588206 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588214 4908 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588222 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 
crc kubenswrapper[4908]: I0131 07:22:02.588230 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588239 4908 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588248 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588257 4908 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588267 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588276 4908 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588284 4908 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 
07:22:02.588293 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588302 4908 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588310 4908 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588318 4908 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588328 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588337 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588347 4908 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588356 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" 
(UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588364 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588372 4908 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588380 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588389 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588398 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588407 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588415 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588423 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588435 4908 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588443 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588450 4908 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588457 4908 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588465 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.588473 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.696783 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.706698 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:22:02 crc kubenswrapper[4908]: W0131 07:22:02.708945 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c5d158175106b73b4db9d5029fe7684474c89c6506bfbe5c7653a532f9998fe1 WatchSource:0}: Error finding container c5d158175106b73b4db9d5029fe7684474c89c6506bfbe5c7653a532f9998fe1: Status 404 returned error can't find the container with id c5d158175106b73b4db9d5029fe7684474c89c6506bfbe5c7653a532f9998fe1 Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.712091 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:22:02 crc kubenswrapper[4908]: I0131 07:22:02.827268 4908 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.074598 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"04edf9850f65cc8b5ce69cb9d044d319d6ae6dd33bd12d00233fd334392deda1"} Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.076270 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ce42030508f1b04ec09edf0610b1c4dbbe11ad04dcb70d6378698568c0e00e35"} Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.077555 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c5d158175106b73b4db9d5029fe7684474c89c6506bfbe5c7653a532f9998fe1"} Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.091973 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.092111 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.092152 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.092197 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 
07:22:03.092236 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092395 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092420 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092439 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092502 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:04.092480633 +0000 UTC m=+30.708425317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092833 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092950 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.093082 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092963 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:04.092942085 +0000 UTC m=+30.708886779 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.093298 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:04.093282794 +0000 UTC m=+30.709227498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.092970 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.093500 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:04.09348829 +0000 UTC m=+30.709432954 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.093024 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.093704 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:04.093694015 +0000 UTC m=+30.709638769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.372425 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:01:57.217585509 +0000 UTC Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.939284 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.939318 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.939598 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.939656 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.939752 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:03 crc kubenswrapper[4908]: E0131 07:22:03.939832 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.945712 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.947165 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.949941 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.950956 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.952215 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.952871 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.953615 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.954802 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.955593 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.956709 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.957382 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.958677 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.959302 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.959928 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.961054 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.961680 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.962813 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.963324 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.964043 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.965292 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.966063 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.967334 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.967858 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.969119 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.969704 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.970667 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.972291 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.972936 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.973731 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.974388 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.974969 4908 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.975128 4908 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.976791 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.977489 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.978169 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.979644 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.980517 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.981225 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.982742 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.983672 4908 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.984600 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.985483 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.986560 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.987151 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.987957 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.988488 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.989345 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.990067 4908 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.990900 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.991343 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.991785 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.992484 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.993082 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 07:22:03 crc kubenswrapper[4908]: I0131 07:22:03.994008 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.100765 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.100848 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.100875 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.100898 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:06.100871183 +0000 UTC m=+32.716815867 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.100937 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.100993 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101006 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.101006 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101016 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101032 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101093 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:06.101079948 +0000 UTC m=+32.717024622 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101054 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101163 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:06.10113396 +0000 UTC m=+32.717078644 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101171 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101166 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101276 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:06.101250933 +0000 UTC m=+32.717195657 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101184 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:04 crc kubenswrapper[4908]: E0131 07:22:04.101379 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:06.101350806 +0000 UTC m=+32.717295570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:04 crc kubenswrapper[4908]: I0131 07:22:04.373353 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:32:16.891516887 +0000 UTC Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.084495 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1"} Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.085936 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864"} Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.100471 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.109633 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.118164 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.130435 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.140426 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.150508 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.374365 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:58:39.406692402 +0000 UTC Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.939552 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:05 crc kubenswrapper[4908]: E0131 07:22:05.939675 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.939557 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:05 crc kubenswrapper[4908]: E0131 07:22:05.939836 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:05 crc kubenswrapper[4908]: I0131 07:22:05.939839 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:05 crc kubenswrapper[4908]: E0131 07:22:05.940064 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.091886 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19"} Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.121423 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.121512 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.121555 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.121590 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121680 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:10.121643197 +0000 UTC m=+36.737587881 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121791 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.121808 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121818 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 
07:22:06.121871 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121884 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121902 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121919 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121920 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121955 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:10.121941435 +0000 UTC m=+36.737886129 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.122026 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:10.121966756 +0000 UTC m=+36.737911490 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.121814 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.122065 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:10.122045388 +0000 UTC m=+36.737990152 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: E0131 07:22:06.122138 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:10.122099999 +0000 UTC m=+36.738044703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.375237 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:07:27.757604615 +0000 UTC Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.562752 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.566862 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.574943 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.589424 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.600150 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.614137 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.621577 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.627127 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.639078 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.648223 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.662088 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.672108 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.682865 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.691673 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.700264 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 07:22:06 crc kubenswrapper[4908]: I0131 07:22:06.708295 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.097288 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9"} Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.113166 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.125286 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.135704 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.150948 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.167681 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.180858 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.195618 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.376062 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:16:21.542730121 +0000 UTC Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.940075 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.940121 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:07 crc kubenswrapper[4908]: E0131 07:22:07.940232 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.940251 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:07 crc kubenswrapper[4908]: E0131 07:22:07.940349 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:07 crc kubenswrapper[4908]: E0131 07:22:07.940528 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.951500 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.961689 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.971123 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.981705 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:07 crc kubenswrapper[4908]: I0131 07:22:07.990935 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.001790 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.011408 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.109682 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.118186 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.126735 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.135315 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.142941 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.151936 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.161381 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.376613 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:52:35.743942691 +0000 UTC Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.570308 4908 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.574442 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.574485 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.574497 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.574554 4908 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.581163 4908 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.581446 4908 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.582662 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.582706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.582759 4908 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.582775 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.582785 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.596652 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.600368 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.600403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.600415 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.600432 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.600444 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.610069 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.613906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.613951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.613960 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.613974 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.613996 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.623087 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.626042 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.626071 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.626080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.626093 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.626104 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.634030 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.638372 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.638399 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.638410 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.638425 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.638435 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.658801 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:22:08 crc kubenswrapper[4908]: E0131 07:22:08.658927 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.660491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.660536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.660548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.660562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.660571 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.762818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.762851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.762877 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.762890 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.762899 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.866061 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.866117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.866133 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.866154 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.866165 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.967710 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.967945 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.968048 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.968180 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:08 crc kubenswrapper[4908]: I0131 07:22:08.968268 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:08Z","lastTransitionTime":"2026-01-31T07:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.070879 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.070912 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.070921 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.070935 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.070944 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.172879 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.172928 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.172941 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.172961 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.172994 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.183064 4908 csr.go:261] certificate signing request csr-6k8k9 is approved, waiting to be issued Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.237079 4908 csr.go:257] certificate signing request csr-6k8k9 is issued Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.275058 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.275097 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.275108 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.275123 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.275134 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.377461 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:16:06.950539754 +0000 UTC Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.378009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.378043 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.378052 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.378066 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.378075 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.480078 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.480147 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.480158 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.480174 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.480185 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.582600 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.582642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.582655 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.582670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.582681 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.685154 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.685196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.685208 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.685224 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.685239 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.721072 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nxc4t"] Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.721430 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.723256 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.723334 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.723516 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.739960 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.751965 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.762903 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.779089 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.787364 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.787426 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.787436 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.787451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.787461 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.794561 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.805907 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.829151 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.847725 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.851162 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhw9\" (UniqueName: \"kubernetes.io/projected/b6ae0245-683c-4bd0-b14f-10d048e5db01-kube-api-access-6nhw9\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.851230 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/b6ae0245-683c-4bd0-b14f-10d048e5db01-hosts-file\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.889936 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.889970 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.889998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.890013 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.890023 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.941720 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:09 crc kubenswrapper[4908]: E0131 07:22:09.941817 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.942149 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:09 crc kubenswrapper[4908]: E0131 07:22:09.942200 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.942236 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:09 crc kubenswrapper[4908]: E0131 07:22:09.942275 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.951879 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhw9\" (UniqueName: \"kubernetes.io/projected/b6ae0245-683c-4bd0-b14f-10d048e5db01-kube-api-access-6nhw9\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.951919 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b6ae0245-683c-4bd0-b14f-10d048e5db01-hosts-file\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.952032 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b6ae0245-683c-4bd0-b14f-10d048e5db01-hosts-file\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.968557 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhw9\" (UniqueName: \"kubernetes.io/projected/b6ae0245-683c-4bd0-b14f-10d048e5db01-kube-api-access-6nhw9\") pod \"node-resolver-nxc4t\" (UID: \"b6ae0245-683c-4bd0-b14f-10d048e5db01\") " pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.992403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.992441 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:09 crc 
kubenswrapper[4908]: I0131 07:22:09.992451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.992467 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:09 crc kubenswrapper[4908]: I0131 07:22:09.992478 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:09Z","lastTransitionTime":"2026-01-31T07:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.033714 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nxc4t" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.095772 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.095847 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.095861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.095885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.095898 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.103969 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j7vgm"] Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.104439 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-944z2"] Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.104599 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.104817 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.107045 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.107801 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nxc4t" event={"ID":"b6ae0245-683c-4bd0-b14f-10d048e5db01","Type":"ContainerStarted","Data":"c88ad1ec356656c9ea34e7c7f2a6786886471913648b30e42e7603c39fbbaa32"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.107943 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.108078 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.108195 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 
07:22:10.110655 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.110727 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.110925 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.110944 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.111150 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.111337 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.114662 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-fwlxr"] Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.115268 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.116653 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.118181 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.125995 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\
\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.156999 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.157095 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.157131 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.157160 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.157187 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157254 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:18.157225828 +0000 UTC m=+44.773170482 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157269 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157327 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:18.157317691 +0000 UTC m=+44.773262345 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157347 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157365 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157377 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157423 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:18.157408593 +0000 UTC m=+44.773353297 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157481 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157494 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157503 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157532 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:18.157521246 +0000 UTC m=+44.773465990 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157582 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: E0131 07:22:10.157610 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:18.157601888 +0000 UTC m=+44.773546642 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.159528 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.173630 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.206510 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.206577 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.206592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc 
kubenswrapper[4908]: I0131 07:22:10.206611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.206631 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.208861 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 
07:22:10.237932 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 07:17:09 +0000 UTC, rotation deadline is 2026-12-13 10:06:11.507799958 +0000 UTC Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.238011 4908 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7586h44m1.269791238s for next certificate rotation Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.239390 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.250354 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258251 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-daemon-config\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258407 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-bin\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258448 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-multus\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258468 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4e21704-e401-411f-99c0-4b4afe2bcf9f-proxy-tls\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258482 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258499 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4e21704-e401-411f-99c0-4b4afe2bcf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258514 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-kubelet\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258581 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-system-cni-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258658 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-etc-kubernetes\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258725 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsgp\" (UniqueName: \"kubernetes.io/projected/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-kube-api-access-4qsgp\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258760 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258799 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-conf-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258822 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-socket-dir-parent\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258842 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-k8s-cni-cncf-io\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258863 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-multus-certs\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258883 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-os-release\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258911 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfplt\" (UniqueName: \"kubernetes.io/projected/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-kube-api-access-qfplt\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258954 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-system-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.258994 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wtw\" (UniqueName: \"kubernetes.io/projected/a4e21704-e401-411f-99c0-4b4afe2bcf9f-kube-api-access-89wtw\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259020 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259045 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: 
\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259066 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-os-release\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259085 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cnibin\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259106 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4e21704-e401-411f-99c0-4b4afe2bcf9f-rootfs\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-netns\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259144 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cnibin\") pod \"multus-944z2\" (UID: 
\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259162 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cni-binary-copy\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.259181 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-hostroot\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.261271 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.274830 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.283823 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.295578 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.306317 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.308997 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.309023 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.309031 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.309043 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.309051 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.317343 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.330544 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.342702 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.355787 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359578 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4e21704-e401-411f-99c0-4b4afe2bcf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359621 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-kubelet\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359644 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-system-cni-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-etc-kubernetes\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359704 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qsgp\" (UniqueName: \"kubernetes.io/projected/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-kube-api-access-4qsgp\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359735 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359755 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-socket-dir-parent\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359773 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-k8s-cni-cncf-io\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359792 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-conf-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359811 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-multus-certs\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359830 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-os-release\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359859 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfplt\" (UniqueName: \"kubernetes.io/projected/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-kube-api-access-qfplt\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359891 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-system-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359913 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wtw\" (UniqueName: \"kubernetes.io/projected/a4e21704-e401-411f-99c0-4b4afe2bcf9f-kube-api-access-89wtw\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359934 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359956 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-os-release\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.359995 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cnibin\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360019 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360040 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4e21704-e401-411f-99c0-4b4afe2bcf9f-rootfs\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360059 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-netns\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360079 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cnibin\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360101 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cni-binary-copy\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360119 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-hostroot\") pod \"multus-944z2\" (UID: 
\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360141 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-daemon-config\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360166 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4e21704-e401-411f-99c0-4b4afe2bcf9f-proxy-tls\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360186 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360207 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-bin\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360229 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-multus\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc 
kubenswrapper[4908]: I0131 07:22:10.360306 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-multus\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360348 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-kubelet\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360386 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-system-cni-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360428 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-etc-kubernetes\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360475 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4e21704-e401-411f-99c0-4b4afe2bcf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360547 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-os-release\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.360687 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-hostroot\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361164 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361226 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-socket-dir-parent\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361264 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-k8s-cni-cncf-io\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361257 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cni-binary-copy\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361304 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-conf-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361341 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cnibin\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361349 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-multus-certs\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361356 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-daemon-config\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361409 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-os-release\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: 
\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361642 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-system-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361748 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361793 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4e21704-e401-411f-99c0-4b4afe2bcf9f-rootfs\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361819 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-run-netns\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361879 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-multus-cni-dir\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc 
kubenswrapper[4908]: I0131 07:22:10.361930 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-cnibin\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.361937 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-host-var-lib-cni-bin\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.365166 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4e21704-e401-411f-99c0-4b4afe2bcf9f-proxy-tls\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.370209 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.376141 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qsgp\" (UniqueName: 
\"kubernetes.io/projected/c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b-kube-api-access-4qsgp\") pod \"multus-944z2\" (UID: \"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\") " pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.376905 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfplt\" (UniqueName: \"kubernetes.io/projected/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-kube-api-access-qfplt\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.377751 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:30:58.270092247 +0000 UTC Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.380727 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wtw\" (UniqueName: \"kubernetes.io/projected/a4e21704-e401-411f-99c0-4b4afe2bcf9f-kube-api-access-89wtw\") pod \"machine-config-daemon-j7vgm\" (UID: \"a4e21704-e401-411f-99c0-4b4afe2bcf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.381702 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.395318 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.411758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.411791 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.411801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.411819 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.411831 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.419056 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-944z2" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.430177 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.446669 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e2a4089-bcb9-4be0-bfbc-30ca54029e9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fwlxr\" (UID: \"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\") " pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.447625 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ne
twork-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: W0131 07:22:10.463707 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4e21704_e401_411f_99c0_4b4afe2bcf9f.slice/crio-e32d5816f4372b0657b917cd1273796a2ab6911d6e7ba79f3f0622817eff3adf WatchSource:0}: Error finding container e32d5816f4372b0657b917cd1273796a2ab6911d6e7ba79f3f0622817eff3adf: Status 404 returned error can't find the container with id e32d5816f4372b0657b917cd1273796a2ab6911d6e7ba79f3f0622817eff3adf Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.467129 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.486358 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513243 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xkd4f"] Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513759 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513799 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.513822 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.514589 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.516685 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.516935 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.517142 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.517158 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.517230 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.517332 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 07:22:10 
crc kubenswrapper[4908]: I0131 07:22:10.518360 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.530807 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.542018 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.555827 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.567112 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.584595 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.597153 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.607405 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.615684 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.615722 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.615732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.615746 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.615757 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.624347 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.636806 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.648621 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.659443 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662073 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662143 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662162 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662213 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662230 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch\") pod 
\"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662246 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662271 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662291 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzpb\" (UniqueName: \"kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662387 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662437 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662454 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662471 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662501 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662523 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662548 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662564 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662583 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662599 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662636 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.662652 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.672222 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.717989 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.718037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.718048 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.718064 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.718075 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.737318 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" Jan 31 07:22:10 crc kubenswrapper[4908]: W0131 07:22:10.747378 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e2a4089_bcb9_4be0_bfbc_30ca54029e9d.slice/crio-eab52f463f0d83f7e085f9b519f90d272af771b29d79f4593fd3eaf8eded53ef WatchSource:0}: Error finding container eab52f463f0d83f7e085f9b519f90d272af771b29d79f4593fd3eaf8eded53ef: Status 404 returned error can't find the container with id eab52f463f0d83f7e085f9b519f90d272af771b29d79f4593fd3eaf8eded53ef Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.763949 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764010 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzpb\" (UniqueName: \"kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764030 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764050 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764082 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764105 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764128 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764144 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764160 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764175 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764199 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764215 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764231 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764244 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764260 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764282 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764302 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764322 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764338 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764358 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764420 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764455 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.764686 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765290 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib\") pod 
\"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765333 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765354 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765375 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765397 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.765766 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766160 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766204 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766456 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766534 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766540 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766550 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766582 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.766668 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.768809 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.784898 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzpb\" (UniqueName: 
\"kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb\") pod \"ovnkube-node-xkd4f\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.820165 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.820208 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.820219 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.820235 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.820246 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.828754 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.921643 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.921667 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.921674 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.921687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:10 crc kubenswrapper[4908]: I0131 07:22:10.921696 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:10Z","lastTransitionTime":"2026-01-31T07:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.023943 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.024019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.024034 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.024048 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.024058 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.110761 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"e32d5816f4372b0657b917cd1273796a2ab6911d6e7ba79f3f0622817eff3adf"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.111498 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"ea1b8e54729b90d23a6ce5d473a33039d2b4f1b372882394d178ca684441b84c"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.112272 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"eab52f463f0d83f7e085f9b519f90d272af771b29d79f4593fd3eaf8eded53ef"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.113028 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerStarted","Data":"a78e18e6d49d4c928c45f96c61271b4701c753836fc03cd558c5b0565b5e1585"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.113960 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nxc4t" event={"ID":"b6ae0245-683c-4bd0-b14f-10d048e5db01","Type":"ContainerStarted","Data":"f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.123995 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.126351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.126400 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.126413 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.126429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.126440 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.139108 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.152876 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.163605 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.173859 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.184952 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.196859 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.210195 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.220396 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.229375 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.229414 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.229425 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 
07:22:11.229442 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.229453 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.231947 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.247955 4908 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.258905 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:11Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.332300 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.332342 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.332351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.332363 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.332374 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.378114 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:13:53.503160391 +0000 UTC Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.435568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.435618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.435630 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.435645 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.435655 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.538614 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.538651 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.538661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.538676 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.538687 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.641522 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.641564 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.641574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.641591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.641602 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.743883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.743937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.743957 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.744000 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.744017 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.847038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.847281 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.847291 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.847304 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.847313 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.939274 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.939315 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.939360 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 07:22:11 crc kubenswrapper[4908]: E0131 07:22:11.939497 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 07:22:11 crc kubenswrapper[4908]: E0131 07:22:11.939776 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 07:22:11 crc kubenswrapper[4908]: E0131 07:22:11.939937 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.949095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.949133 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.949142 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.949155 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:11 crc kubenswrapper[4908]: I0131 07:22:11.949164 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:11Z","lastTransitionTime":"2026-01-31T07:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.052173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.052216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.052227 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.052245 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.052256 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.119575 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.121384 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" exitCode=0
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.121449 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.123226 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.125103 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerStarted","Data":"1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a"}
Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.135870 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.149872 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.155019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.155057 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.155066 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc 
kubenswrapper[4908]: I0131 07:22:12.155079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.155089 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.162100 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.181311 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.192606 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.208780 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.222138 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.233974 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.250488 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.257770 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.257807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.257824 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.257840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.257853 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.262681 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.280114 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.290851 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.303715 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.314553 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.337423 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.350904 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.360441 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.360480 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.360491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.360509 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.360520 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.362708 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.374009 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.379089 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-08 19:46:25.355793245 +0000 UTC Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.388096 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.401052 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.417132 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.429330 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.445930 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.458451 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.462412 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.462444 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.462452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 
07:22:12.462466 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.462475 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.564422 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.564455 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.564464 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.564478 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.564487 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.668441 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.668508 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.668526 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.668548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.668562 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771473 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771521 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771540 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771560 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771575 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.771656 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kk2t9"] Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.772170 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.774672 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.775048 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.775176 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.777559 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.787205 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.799703 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.808257 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.820566 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.830819 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.843039 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.856925 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.871860 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.873755 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.873786 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.873797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.873813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.873822 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.883540 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jr8\" (UniqueName: \"kubernetes.io/projected/425085fb-8558-4dca-814f-38c080bc3672-kube-api-access-z8jr8\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.883599 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/425085fb-8558-4dca-814f-38c080bc3672-host\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.883636 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/425085fb-8558-4dca-814f-38c080bc3672-serviceca\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.888044 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.903747 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.918404 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.932082 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.945609 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.976427 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.976468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.976480 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.976496 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.976521 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:12Z","lastTransitionTime":"2026-01-31T07:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.985078 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/425085fb-8558-4dca-814f-38c080bc3672-serviceca\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.985131 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8jr8\" (UniqueName: \"kubernetes.io/projected/425085fb-8558-4dca-814f-38c080bc3672-kube-api-access-z8jr8\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.985160 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/425085fb-8558-4dca-814f-38c080bc3672-host\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.985208 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/425085fb-8558-4dca-814f-38c080bc3672-host\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:12 crc kubenswrapper[4908]: I0131 07:22:12.986121 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/425085fb-8558-4dca-814f-38c080bc3672-serviceca\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 
07:22:13.001443 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8jr8\" (UniqueName: \"kubernetes.io/projected/425085fb-8558-4dca-814f-38c080bc3672-kube-api-access-z8jr8\") pod \"node-ca-kk2t9\" (UID: \"425085fb-8558-4dca-814f-38c080bc3672\") " pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.078627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.078679 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.078688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.078700 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.078709 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.084862 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kk2t9" Jan 31 07:22:13 crc kubenswrapper[4908]: W0131 07:22:13.100502 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod425085fb_8558_4dca_814f_38c080bc3672.slice/crio-713e316c5e0bdf34955d2618a11fb2f9bfe8083fa791ef7e26d88fec44798483 WatchSource:0}: Error finding container 713e316c5e0bdf34955d2618a11fb2f9bfe8083fa791ef7e26d88fec44798483: Status 404 returned error can't find the container with id 713e316c5e0bdf34955d2618a11fb2f9bfe8083fa791ef7e26d88fec44798483 Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.128143 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kk2t9" event={"ID":"425085fb-8558-4dca-814f-38c080bc3672","Type":"ContainerStarted","Data":"713e316c5e0bdf34955d2618a11fb2f9bfe8083fa791ef7e26d88fec44798483"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.129500 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.147357 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.161032 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.171009 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.182023 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.182073 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.182088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.182143 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.182158 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.185606 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.200614 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.217378 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.229304 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.241207 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.258847 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.270089 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.282153 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.284796 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.284825 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.284836 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.284851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.284861 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.294540 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.305444 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.313751 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.341311 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.356633 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.377000 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.380862 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:55:32.420704477 +0000 UTC Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.392551 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.392588 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.392599 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.392617 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.392628 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.400262 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.414568 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.426259 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.435546 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.450733 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.461426 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.473536 4908 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.484491 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:13Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495046 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495073 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495082 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495104 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.495397 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:13Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.597737 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.597904 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.597996 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.598079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.598152 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.700776 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.700810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.700822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.700836 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.700844 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.803021 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.803063 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.803072 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.803087 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.803097 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.905880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.906165 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.906260 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.906340 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.906419 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:13Z","lastTransitionTime":"2026-01-31T07:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.939535 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.939574 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:13 crc kubenswrapper[4908]: E0131 07:22:13.939657 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.939703 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:13 crc kubenswrapper[4908]: E0131 07:22:13.939821 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:13 crc kubenswrapper[4908]: E0131 07:22:13.940216 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.956146 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:22:13 crc kubenswrapper[4908]: I0131 07:22:13.956526 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.010530 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.010562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.010571 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.010585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.010595 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.113086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.113126 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.113138 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.113154 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.113165 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.134705 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.135638 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kk2t9" event={"ID":"425085fb-8558-4dca-814f-38c080bc3672","Type":"ContainerStarted","Data":"51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.136734 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297" exitCode=0 Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.137154 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.153678 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.164558 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.178185 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.189790 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.203648 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.214369 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.216301 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.216341 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.216351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.216369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.216379 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.228498 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.251524 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.263229 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.275867 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.286865 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.299283 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.312208 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:14Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.318672 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.318698 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.318706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.318720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.318728 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.327170 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:14Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.381901 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:00:32.768890473 +0000 UTC Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.420829 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.420881 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.420896 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.420914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.420927 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.523348 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.523397 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.523413 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.523435 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.523451 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.626231 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.626308 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.626330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.626366 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.626386 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.729460 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.729490 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.729498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.729513 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.729522 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.832491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.832528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.832537 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.832554 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.832562 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.935585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.935622 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.935633 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.935649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:14 crc kubenswrapper[4908]: I0131 07:22:14.935659 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:14Z","lastTransitionTime":"2026-01-31T07:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.037688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.037727 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.037738 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.037755 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.037767 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.143882 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.143923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.143933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.143949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.143960 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.146365 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.148149 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.148565 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.151286 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.153147 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.167853 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.183659 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.196261 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f0
2fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.225556 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.243751 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.246076 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.246117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.246130 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.246180 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.246193 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.257479 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.271419 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.287057 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.298609 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.318853 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.328387 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.345052 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.348765 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.348804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.348815 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.348832 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.348862 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.355585 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.371392 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.382658 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:07:16.493632993 +0000 UTC Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.386471 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.399481 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.412625 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.423939 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.435542 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.446270 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.451671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 
07:22:15.451699 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.451710 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.451724 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.451733 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.455373 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.466929 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.477027 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.490070 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.502605 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.513551 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.534111 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.546021 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.553732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.553902 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.553965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.554079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.554158 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.656901 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.656965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.657014 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.657038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.657054 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.759497 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.759539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.759551 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.759569 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.759581 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.824198 4908 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.861534 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.861568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.861578 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.861591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.861600 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.939267 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.939305 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.939289 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:15 crc kubenswrapper[4908]: E0131 07:22:15.939439 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:15 crc kubenswrapper[4908]: E0131 07:22:15.939516 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:15 crc kubenswrapper[4908]: E0131 07:22:15.939590 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.964358 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.964468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.964540 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.964608 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:15 crc kubenswrapper[4908]: I0131 07:22:15.964673 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:15Z","lastTransitionTime":"2026-01-31T07:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.067794 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.068326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.068410 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.068494 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.068573 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.160569 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.170861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.170949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.170959 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.171001 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.171013 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.274269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.274314 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.274327 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.274372 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.274385 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.376663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.376716 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.376729 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.376751 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.376765 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.383232 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:04:58.664590939 +0000 UTC Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.478895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.478954 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.478997 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.479025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.479041 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.582608 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.583045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.583183 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.583383 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.583504 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.685849 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.685899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.685911 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.685932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.685945 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.787927 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.787998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.788011 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.788030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.788044 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.890511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.890562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.890570 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.890583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.890593 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.993019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.993060 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.993077 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.993095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:16 crc kubenswrapper[4908]: I0131 07:22:16.993106 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:16Z","lastTransitionTime":"2026-01-31T07:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.097498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.097538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.097549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.097567 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.097578 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.169026 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.169072 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.201443 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.201583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.201610 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.201690 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.201715 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.304474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.304537 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.304549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.304565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.304577 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.383670 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:01:06.561432769 +0000 UTC Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.406713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.406774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.406789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.406808 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.406823 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.508963 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.509008 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.509017 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.509030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.509040 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.611173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.611210 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.611219 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.611234 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.611244 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.714208 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.714254 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.714264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.714281 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.714292 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.816702 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.816781 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.816793 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.816837 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.816848 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.919125 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.919168 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.919177 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.919191 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.919201 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:17Z","lastTransitionTime":"2026-01-31T07:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.939389 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.939408 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.939409 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:17 crc kubenswrapper[4908]: E0131 07:22:17.939513 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:17 crc kubenswrapper[4908]: E0131 07:22:17.939772 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:17 crc kubenswrapper[4908]: E0131 07:22:17.939846 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.954595 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.966759 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.983268 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:17 crc kubenswrapper[4908]: I0131 07:22:17.996486 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.008714 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.020090 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.021695 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc 
kubenswrapper[4908]: I0131 07:22:18.021840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.021915 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.021996 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.022066 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.031110 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.040578 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.050438 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.059446 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.072305 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.081552 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.091957 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.103322 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.124736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.125074 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.125139 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.125383 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.125453 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.227326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.227365 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.227378 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.227395 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.227407 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.245707 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.245808 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.245832 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.245848 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.245875 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.245958 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246021 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.24600743 +0000 UTC m=+60.861952084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246308 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246336 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246366 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:18 
crc kubenswrapper[4908]: E0131 07:22:18.246346 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.246338339 +0000 UTC m=+60.862282993 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246377 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246444 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.246411451 +0000 UTC m=+60.862356105 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246475 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.246467033 +0000 UTC m=+60.862411687 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246636 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246707 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246781 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:18 crc kubenswrapper[4908]: E0131 07:22:18.246915 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.246897764 +0000 UTC m=+60.862842418 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.328948 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.329008 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.329021 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.329037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.329048 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.384183 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:48:12.084806791 +0000 UTC Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.431916 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.431953 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.431963 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.431992 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.432029 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.534739 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.534772 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.534797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.534811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.534823 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.638830 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.638878 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.638891 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.638915 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.638925 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.741500 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.741538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.741547 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.741561 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.741570 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.843657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.843701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.843713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.843727 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.843737 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.945523 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.945565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.945578 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.945594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:18 crc kubenswrapper[4908]: I0131 07:22:18.945652 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:18Z","lastTransitionTime":"2026-01-31T07:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.047850 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.047886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.047895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.047908 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.047917 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.054441 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.054474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.054484 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.054500 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.054511 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.068029 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.071391 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.071425 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.071438 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.071456 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.071468 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.084574 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.087456 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.087505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.087515 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.087528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.087552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.098871 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.102013 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.102043 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.102055 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.102067 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.102075 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.112535 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.115319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.115352 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.115361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.115377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.115391 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.126265 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.126380 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.150711 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.150740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.150748 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.150763 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.150773 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.178050 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943" exitCode=0 Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.178091 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.192089 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.204091 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.214873 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.226853 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.239336 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252587 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252636 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252689 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.252630 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.262693 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.274627 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.286557 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.298282 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.308639 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.327022 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.339663 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.353499 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.355026 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.355054 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.355063 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.355086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.355096 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.384491 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:27:36.480247067 +0000 UTC Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.457266 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.457295 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.457304 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.457318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.457328 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.558732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.558797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.558810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.558827 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.558839 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.661504 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.661539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.661550 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.661565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.661575 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.764232 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.764425 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.764443 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.764458 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.764468 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.867125 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.867168 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.867177 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.867193 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.867204 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.939527 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.939833 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.939746 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.939947 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.940193 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:19 crc kubenswrapper[4908]: E0131 07:22:19.940315 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.970378 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.970491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.970506 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.970528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:19 crc kubenswrapper[4908]: I0131 07:22:19.970552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:19Z","lastTransitionTime":"2026-01-31T07:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.073221 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.073286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.073301 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.073326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.073349 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.175855 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.175918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.175929 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.175953 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.175965 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.186492 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.278871 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.278926 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.278937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.278962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.279012 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.381236 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.381284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.381294 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.381312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.381323 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.385406 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:38:04.132840482 +0000 UTC Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.485715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.485783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.485798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.485823 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.485837 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.589095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.589143 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.589152 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.589168 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.589179 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.692035 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.692480 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.692561 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.692649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.692729 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.795714 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.795771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.795789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.795813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.795827 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.902130 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.902815 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.902891 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.902916 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:20 crc kubenswrapper[4908]: I0131 07:22:20.902930 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:20Z","lastTransitionTime":"2026-01-31T07:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.005423 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.005476 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.005489 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.005506 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.005517 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.107932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.107991 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.108004 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.108019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.108030 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.192568 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.210736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.210783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.210796 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.210813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.210825 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.212651 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.226285 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.238948 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.250139 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.262737 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.275359 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.286002 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.299595 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.310639 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.313265 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.313301 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.313310 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.313323 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.313332 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.322561 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.334672 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.346275 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.358238 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.374175 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.386489 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:27:42.636613807 +0000 UTC Jan 31 
07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.441820 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.441862 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.441912 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.441930 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.441941 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.545212 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.545302 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.545317 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.545335 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.545348 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.648760 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.648803 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.648814 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.648841 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.648853 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.751420 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.751452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.751462 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.751479 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.751492 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.853618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.853649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.853664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.853676 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.853684 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.939601 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.939665 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.939800 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:21 crc kubenswrapper[4908]: E0131 07:22:21.940087 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:21 crc kubenswrapper[4908]: E0131 07:22:21.940162 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:21 crc kubenswrapper[4908]: E0131 07:22:21.940260 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.955917 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.955960 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.955968 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.955995 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:21 crc kubenswrapper[4908]: I0131 07:22:21.956006 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:21Z","lastTransitionTime":"2026-01-31T07:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.058320 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.058356 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.058364 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.058377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.058387 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.160421 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.160474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.160490 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.160511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.160535 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.198259 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba" exitCode=0 Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.198306 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.223689 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.242266 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.258037 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.263432 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.263503 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.263520 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.263536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.263547 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.271204 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.278929 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp"] Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.279419 4908 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.280752 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.280969 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.288686 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.301952 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.315901 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.330036 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.343373 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.354617 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.367880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.367918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.367928 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 
07:22:22.367944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.367954 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.368467 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.382496 4908 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.385349 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.385382 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcd7m\" (UniqueName: \"kubernetes.io/projected/85b723d6-2526-40a1-9e55-05487affbda0-kube-api-access-mcd7m\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc 
kubenswrapper[4908]: I0131 07:22:22.385417 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.385432 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85b723d6-2526-40a1-9e55-05487affbda0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.386966 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:17:37.834355125 +0000 UTC Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.392584 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.409969 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 
07:22:22.422256 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.433277 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.445501 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.455626 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.467308 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.470612 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.470637 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.470644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.470658 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.470666 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.476753 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.485767 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.485803 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85b723d6-2526-40a1-9e55-05487affbda0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.485838 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.485865 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcd7m\" (UniqueName: \"kubernetes.io/projected/85b723d6-2526-40a1-9e55-05487affbda0-kube-api-access-mcd7m\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.486450 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.486608 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85b723d6-2526-40a1-9e55-05487affbda0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.489737 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.491891 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85b723d6-2526-40a1-9e55-05487affbda0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.502638 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcd7m\" (UniqueName: \"kubernetes.io/projected/85b723d6-2526-40a1-9e55-05487affbda0-kube-api-access-mcd7m\") pod \"ovnkube-control-plane-749d76644c-49tqp\" (UID: \"85b723d6-2526-40a1-9e55-05487affbda0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 
07:22:22.504454 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.514313 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.526597 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 
07:22:22.535130 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.546739 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.565085 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.572885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.572916 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.572924 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.572938 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.572949 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.577615 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.593128 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.595923 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.675682 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.675715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.675723 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.675738 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.675747 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.777952 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.778015 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.778027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.778045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.778059 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.880771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.880821 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.880834 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.880854 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.880866 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.986485 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.986537 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.986554 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.986574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:22 crc kubenswrapper[4908]: I0131 07:22:22.986588 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:22Z","lastTransitionTime":"2026-01-31T07:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.089812 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.089892 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.089918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.089953 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.090009 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.192085 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.192134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.192151 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.192171 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.192188 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.202777 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.205542 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" event={"ID":"85b723d6-2526-40a1-9e55-05487affbda0","Type":"ContainerStarted","Data":"6d463e63c37a88dbb9e1d5d52d077c8af598a740d68a601b420741a14ca0684a"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.293901 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.294058 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.294137 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.294207 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.294273 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.388077 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:00:51.020592167 +0000 UTC Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.396958 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.397016 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.397028 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.397045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.397056 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.499532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.499607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.499639 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.499671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.499689 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.601435 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.601497 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.601517 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.601537 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.601552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.704667 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.704738 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.704756 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.704782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.704802 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.748676 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2cg54"] Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.749494 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:23 crc kubenswrapper[4908]: E0131 07:22:23.749620 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.768596 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.787086 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.801632 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.807425 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.807464 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.807481 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.807502 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.807520 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.814353 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.828512 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.840617 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.854203 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 
07:22:23.864855 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.876754 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc 
kubenswrapper[4908]: I0131 07:22:23.889535 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.899708 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.899780 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn5gh\" (UniqueName: \"kubernetes.io/projected/1242d7b7-ba0b-4084-88f1-fedf57d84b11-kube-api-access-cn5gh\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.903154 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.909961 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.910010 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.910022 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.910037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.910047 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:23Z","lastTransitionTime":"2026-01-31T07:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.915844 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.926093 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.940014 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.940060 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.940107 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:23 crc kubenswrapper[4908]: E0131 07:22:23.940145 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:23 crc kubenswrapper[4908]: E0131 07:22:23.940216 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:23 crc kubenswrapper[4908]: E0131 07:22:23.940332 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.944261 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.954707 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:23 crc kubenswrapper[4908]: I0131 07:22:23.966037 4908 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.001272 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.001367 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn5gh\" (UniqueName: \"kubernetes.io/projected/1242d7b7-ba0b-4084-88f1-fedf57d84b11-kube-api-access-cn5gh\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:24 crc kubenswrapper[4908]: E0131 07:22:24.001392 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:24 crc kubenswrapper[4908]: E0131 07:22:24.001454 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:24.501436641 +0000 UTC m=+51.117381295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.012071 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.012128 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.012144 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.012170 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.012185 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.019330 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn5gh\" (UniqueName: \"kubernetes.io/projected/1242d7b7-ba0b-4084-88f1-fedf57d84b11-kube-api-access-cn5gh\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.117505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.117915 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.117946 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.117996 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.118012 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.212746 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.215068 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482" exitCode=0 Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.215132 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.219548 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" event={"ID":"85b723d6-2526-40a1-9e55-05487affbda0","Type":"ContainerStarted","Data":"abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.219602 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" event={"ID":"85b723d6-2526-40a1-9e55-05487affbda0","Type":"ContainerStarted","Data":"4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.220701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.220755 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc 
kubenswrapper[4908]: I0131 07:22:24.220774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.220799 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.220819 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.232391 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.244063 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.255831 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.272140 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.282394 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.296552 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.305395 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.315478 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc 
kubenswrapper[4908]: I0131 07:22:24.339238 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.355354 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.355391 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.355403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.355419 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.355449 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.369535 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.384941 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.389236 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:22:31.773323921 +0000 UTC Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.395689 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.415129 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.429836 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.440559 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.451263 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.458002 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.458040 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.458053 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.458068 4908 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.458079 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.506351 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:24 crc kubenswrapper[4908]: E0131 07:22:24.506559 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:24 crc kubenswrapper[4908]: E0131 07:22:24.506642 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:25.506623897 +0000 UTC m=+52.122568551 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.560642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.560684 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.560695 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.560712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.560723 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.662787 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.662824 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.662833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.662848 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.662857 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.764601 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.764649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.764666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.764688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.764705 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.867115 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.867164 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.867181 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.867199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.867211 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.939776 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54"
Jan 31 07:22:24 crc kubenswrapper[4908]: E0131 07:22:24.939917 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.969790 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.969833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.969842 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.969857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:24 crc kubenswrapper[4908]: I0131 07:22:24.969870 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:24Z","lastTransitionTime":"2026-01-31T07:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.072495 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.072538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.072548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.072563 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.072573 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.174807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.174849 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.174861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.174877 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.174889 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.226024 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9"}
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.232737 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5"}
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.232970 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.233010 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.233023 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.246012 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.262349 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.275558 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.278223 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.278276 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.278288 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.278306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.278322 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.283713 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.283830 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.291389 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.303930 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.314965 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.327445 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.339774 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.354102 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.365833 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.375831 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.381778 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.381933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.382025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.382098 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.382160 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.388920 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.389917 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:25:12.683463805 +0000 UTC Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.401128 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.411399 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.421630 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.437605 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfp
lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.449248 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.462203 4908 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.473106 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.484583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.484625 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.484638 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.484657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.484669 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.489818 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfp
lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.500772 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.512151 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc 
kubenswrapper[4908]: I0131 07:22:25.517111 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:25 crc kubenswrapper[4908]: E0131 07:22:25.517236 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:25 crc kubenswrapper[4908]: E0131 07:22:25.517280 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:27.517268497 +0000 UTC m=+54.133213151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.523438 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.537149 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.548806 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.563048 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f0
2fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.580703 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.587160 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.587217 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.587234 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.587258 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.587377 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.595548 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec374
42b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.605474 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.615336 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.627997 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.640534 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.689586 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.689622 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.689630 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.689642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.689651 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.792831 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.792893 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.792908 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.792929 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.792945 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.896646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.896710 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.896731 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.896780 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.896807 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.939734 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.939843 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:25 crc kubenswrapper[4908]: E0131 07:22:25.939973 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.940116 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:25 crc kubenswrapper[4908]: E0131 07:22:25.940265 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:25 crc kubenswrapper[4908]: E0131 07:22:25.940364 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.999479 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.999535 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.999546 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:25 crc kubenswrapper[4908]: I0131 07:22:25.999567 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:25.999580 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:25Z","lastTransitionTime":"2026-01-31T07:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.102191 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.102262 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.102280 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.102305 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.102323 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.205748 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.205801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.205820 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.205848 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.205862 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.308826 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.308944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.308966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.309019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.309037 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.390570 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:11:08.746487334 +0000 UTC Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.412096 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.412147 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.412161 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.412184 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.412201 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.515430 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.515482 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.515490 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.515510 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.515521 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.619199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.619262 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.619273 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.619295 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.619310 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.722670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.722714 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.722725 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.722744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.722755 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.826740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.826804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.826822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.826848 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.826866 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.929863 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.929906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.929916 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.929933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.929946 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:26Z","lastTransitionTime":"2026-01-31T07:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:26 crc kubenswrapper[4908]: I0131 07:22:26.939693 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:26 crc kubenswrapper[4908]: E0131 07:22:26.939884 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.033385 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.033442 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.033461 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.033486 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.033504 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.136448 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.136529 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.136554 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.136588 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.136613 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.238970 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.239094 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.239112 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.239136 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.239154 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.341957 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.342019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.342027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.342043 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.342058 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.391590 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:17:26.671325568 +0000 UTC Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.444183 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.444250 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.444272 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.444299 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.444329 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.536783 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:27 crc kubenswrapper[4908]: E0131 07:22:27.537165 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:27 crc kubenswrapper[4908]: E0131 07:22:27.537288 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:31.537254652 +0000 UTC m=+58.153199356 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.546528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.546572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.546589 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.546611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.546626 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.649053 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.649086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.649094 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.649106 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.649117 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.751700 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.751780 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.751803 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.751833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.751856 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.854538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.854581 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.854590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.854602 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.854610 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.916847 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.926617 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.934793 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-31T07:22:27Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.941169 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.941156 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.941499 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:27 crc kubenswrapper[4908]: E0131 07:22:27.941509 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:27 crc kubenswrapper[4908]: E0131 07:22:27.941542 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:27 crc kubenswrapper[4908]: E0131 07:22:27.941760 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.957199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.957262 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.957282 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.957310 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.957329 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:27Z","lastTransitionTime":"2026-01-31T07:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.972616 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:27Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:27 crc kubenswrapper[4908]: I0131 07:22:27.992448 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:27Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.008336 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.022065 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.037735 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.052132 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.060740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.061053 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.061196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.061285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.061351 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.066845 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.082505 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.097737 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfp
lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.108377 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.119036 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc 
kubenswrapper[4908]: I0131 07:22:28.131323 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.146137 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.160201 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.164023 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 
07:22:28.164070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.164086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.164103 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.164114 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.172240 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.189524 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.202421 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.217412 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.231662 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.245666 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9" exitCode=0 Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.245673 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.245827 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.255276 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.266028 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.266062 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.266072 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.266088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.266100 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.268085 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfp
lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.276339 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.285373 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc 
kubenswrapper[4908]: I0131 07:22:28.296953 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.307488 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.318802 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.332087 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.340637 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.357033 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.368628 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.368663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.368673 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.368693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.368709 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.375122 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.390811 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.391804 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:48:05.907529509 +0000 UTC Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.407527 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.422066 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.436595 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.448137 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.463828 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c9
45778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.471217 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.471241 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.471248 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.471260 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.471269 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.476708 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.491590 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc 
kubenswrapper[4908]: I0131 07:22:28.505412 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.522951 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.536567 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.548086 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.558864 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.574595 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc 
kubenswrapper[4908]: I0131 07:22:28.574872 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.575023 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.575210 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.575330 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.575597 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.590326 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.600537 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.612411 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.628298 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.678385 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.678427 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.678436 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.678452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.678462 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.781375 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.781444 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.781463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.781490 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.781510 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.884433 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.884781 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.885041 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.885264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.885603 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.939736 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:28 crc kubenswrapper[4908]: E0131 07:22:28.940148 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.989274 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.989388 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.989417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.989450 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:28 crc kubenswrapper[4908]: I0131 07:22:28.989475 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:28Z","lastTransitionTime":"2026-01-31T07:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.091928 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.091965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.091998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.092014 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.092027 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.196566 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.196625 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.196641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.196660 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.196679 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.299163 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.299213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.299229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.299248 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.299263 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.392775 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:08:59.478145608 +0000 UTC Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.401642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.401677 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.401687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.401704 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.401725 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.425081 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.425139 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.425153 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.425175 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.425190 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.442627 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.447461 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.447509 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.447522 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.447540 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.447552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.462243 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.465994 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.466038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.466050 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.466066 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.466078 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.480013 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.484654 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.484688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.484698 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.484714 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.484725 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.503520 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.507086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.507134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.507149 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.507167 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.507181 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.525819 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.526083 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.528007 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.528062 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.528080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.528103 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.528123 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.631249 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.631298 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.631314 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.631339 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.631355 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.734079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.734127 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.734142 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.734176 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.734194 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.836151 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.836184 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.836196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.836213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.836224 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.938539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.938574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.938583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.938596 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.938606 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:29Z","lastTransitionTime":"2026-01-31T07:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.939143 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.939223 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.939143 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.939283 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:29 crc kubenswrapper[4908]: I0131 07:22:29.939284 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:29 crc kubenswrapper[4908]: E0131 07:22:29.939386 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.040829 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.040864 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.040874 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.040888 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.040898 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.143109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.143162 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.143180 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.143202 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.143218 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.246611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.246659 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.246667 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.246681 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.246689 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.256945 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.276347 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.302833 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.315117 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.333671 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.346012 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.348717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc 
kubenswrapper[4908]: I0131 07:22:30.348748 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.348759 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.348773 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.348783 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.359283 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.372096 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.385171 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.393427 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:17:10.924611966 +0000 UTC Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.396002 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.407742 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.417631 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.427359 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.439446 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.451833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.451862 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.451870 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.451886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.451894 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.453116 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.462818 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.471648 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.484380 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.554702 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.554749 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.554767 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.554790 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.554806 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.658049 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.658464 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.658577 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.658703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.658794 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.762355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.762399 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.762410 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.762429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.762441 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.864280 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.864326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.864337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.864353 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.864367 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.940108 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:30 crc kubenswrapper[4908]: E0131 07:22:30.940266 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.966592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.966657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.966673 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.966698 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:30 crc kubenswrapper[4908]: I0131 07:22:30.966714 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:30Z","lastTransitionTime":"2026-01-31T07:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.068752 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.068808 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.068827 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.068851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.068867 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.172513 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.172558 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.172568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.172582 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.172591 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.268186 4908 generic.go:334] "Generic (PLEG): container finished" podID="2e2a4089-bcb9-4be0-bfbc-30ca54029e9d" containerID="05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17" exitCode=0 Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.268235 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerDied","Data":"05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.274300 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.274358 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.274378 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.274401 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.274418 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.281950 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.293641 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.302373 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.315332 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.324270 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.332443 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc 
kubenswrapper[4908]: I0131 07:22:31.344037 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.356148 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.368866 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.376811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.376848 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.376859 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.376875 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.376886 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.384722 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.394251 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:51:37.879523761 +0000 UTC Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.395004 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d664
38c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.415298 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.426862 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.436712 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.449118 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.462039 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.473584 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.478936 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.478972 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.479000 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.479020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.479032 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.573301 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:31 crc kubenswrapper[4908]: E0131 07:22:31.573474 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:31 crc kubenswrapper[4908]: E0131 07:22:31.573597 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:39.573569692 +0000 UTC m=+66.189514376 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.581516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.581583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.581595 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.581613 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.581625 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.684063 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.684117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.684131 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.684150 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.684163 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.726161 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.742163 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.762842 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.774092 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.786513 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.786549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.786559 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.786574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.786582 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.787874 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.800450 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.812594 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.831703 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.843443 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.858855 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc 
kubenswrapper[4908]: I0131 07:22:31.873498 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.888664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.888706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.888722 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.888736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.888752 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.890460 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.904699 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.917715 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.930068 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.940370 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.940365 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:31 crc kubenswrapper[4908]: E0131 07:22:31.940482 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.940754 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:31 crc kubenswrapper[4908]: E0131 07:22:31.940848 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:31 crc kubenswrapper[4908]: E0131 07:22:31.940828 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.951490 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.968687 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076
a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.979645 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.991022 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.991052 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.991060 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.991073 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:31 crc kubenswrapper[4908]: I0131 07:22:31.991082 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:31Z","lastTransitionTime":"2026-01-31T07:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.096810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.096899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.096908 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.096922 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.096931 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.198555 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.198603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.198615 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.198632 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.198645 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.302132 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.302167 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.302179 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.302199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.302210 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.395166 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:58:20.751112052 +0000 UTC Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.405459 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.405501 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.405511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.405526 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.405538 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.508895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.508951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.508968 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.509010 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.509026 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.614427 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.614507 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.614528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.614558 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.614579 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.717164 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.717196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.717207 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.717221 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.717233 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.819748 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.819781 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.819790 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.819803 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.819812 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.922364 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.922418 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.922442 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.922462 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.922477 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:32Z","lastTransitionTime":"2026-01-31T07:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:32 crc kubenswrapper[4908]: I0131 07:22:32.939911 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:32 crc kubenswrapper[4908]: E0131 07:22:32.940084 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.024800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.024838 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.024846 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.024862 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.024871 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.127184 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.127220 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.127231 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.127246 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.127256 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.230205 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.230269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.230286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.230324 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.230361 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.281928 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" event={"ID":"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d","Type":"ContainerStarted","Data":"bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.333753 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.333813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.333833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.333857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.333875 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.396108 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:08:35.088173565 +0000 UTC Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.436408 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.436449 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.436461 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.436477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.436488 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.538604 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.538639 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.538647 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.538660 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.538670 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.641565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.641612 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.641621 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.641635 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.641704 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.744052 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.744117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.744136 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.744161 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.744203 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.846819 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.846851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.846859 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.846872 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.846881 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.939631 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.939685 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.939707 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:33 crc kubenswrapper[4908]: E0131 07:22:33.939806 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:33 crc kubenswrapper[4908]: E0131 07:22:33.940042 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:33 crc kubenswrapper[4908]: E0131 07:22:33.940257 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.949233 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.949300 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.949318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.949344 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:33 crc kubenswrapper[4908]: I0131 07:22:33.949363 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:33Z","lastTransitionTime":"2026-01-31T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.052263 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.052321 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.052339 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.052361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.052378 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.155227 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.155297 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.155319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.155351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.155376 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.257607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.257661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.257672 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.257689 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.257701 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.288026 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/0.log" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.291966 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5" exitCode=1 Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.292040 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.293025 4908 scope.go:117] "RemoveContainer" containerID="bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.298313 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.298407 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:23:06.29838851 +0000 UTC m=+92.914333174 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.298834 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.298875 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.298908 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.299036 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 
07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.299051 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.299063 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.299098 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:23:06.299088839 +0000 UTC m=+92.915033503 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.299178 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300694 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300763 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300788 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300881 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300899 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.300907 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:23:06.300881277 +0000 UTC m=+92.916825971 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.301213 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:23:06.301194145 +0000 UTC m=+92.917138829 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.301238 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:23:06.301225346 +0000 UTC m=+92.917170030 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.315658 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3
982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.328544 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.345201 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.357712 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.360244 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.360269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.360278 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.360294 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.360304 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.372241 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.386474 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.397099 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:58:26.441140937 +0000 UTC Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.402909 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.412430 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.430740 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.443750 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.452768 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc 
kubenswrapper[4908]: I0131 07:22:34.462365 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.462491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.462573 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.462675 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.462783 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.466639 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.482923 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.496348 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.512404 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.524117 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.548095 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.561888 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.565301 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.565330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.565341 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.565356 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.565367 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.574544 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.589789 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.601213 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc 
kubenswrapper[4908]: I0131 07:22:34.612591 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.624714 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.634947 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.645223 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.657897 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.667072 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.668270 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.668301 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.668312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.668329 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.668386 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.678968 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.691721 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.706445 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.716435 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.732861 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:33Z\\\",\\\"message\\\":\\\"factory.go:141\\\\nI0131 07:22:32.656244 6292 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:32.656475 6292 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:22:32.656490 6292 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:32.656522 6292 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:32.656536 6292 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:32.656552 6292 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:32.656587 6292 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:32.656607 6292 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:32.656613 6292 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:32.656615 6292 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:32.656640 6292 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:32.656654 6292 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:32.656671 6292 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:32.657051 6292 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.744755 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.754604 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.770235 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.770359 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.770424 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.770498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.770572 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.873256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.873319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.873337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.873362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.873379 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.939958 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:34 crc kubenswrapper[4908]: E0131 07:22:34.940183 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.976179 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.976243 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.976266 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.976296 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:34 crc kubenswrapper[4908]: I0131 07:22:34.976319 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:34Z","lastTransitionTime":"2026-01-31T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.078532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.078788 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.078871 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.078969 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.079097 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.185498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.185562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.185584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.185617 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.185638 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.288284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.288348 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.288366 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.288390 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.288407 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.298105 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/0.log"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.302149 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.391807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.391885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.391914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.391945 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.391968 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.398311 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:04:06.218814093 +0000 UTC
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.495213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.495265 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.495276 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.495295 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.495308 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.597573 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.597610 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.597620 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.597634 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.597644 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.701353 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.701415 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.701428 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.701453 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.701472 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.804185 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.804225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.804237 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.804253 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.804264 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.906434 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.906477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.906486 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.906498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.906509 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:35Z","lastTransitionTime":"2026-01-31T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.940014 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.940021 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 07:22:35 crc kubenswrapper[4908]: E0131 07:22:35.940147 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 07:22:35 crc kubenswrapper[4908]: I0131 07:22:35.940197 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 07:22:35 crc kubenswrapper[4908]: E0131 07:22:35.940298 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 07:22:35 crc kubenswrapper[4908]: E0131 07:22:35.940389 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.009542 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.009602 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.009615 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.009633 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.009646 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.111910 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.111954 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.111963 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.111995 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.112005 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.214903 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.214954 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.214966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.215016 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.215030 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.305744 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.317671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.317726 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.317736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.317755 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.317768 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.323871 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.343749 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.356824 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.372062 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc 
kubenswrapper[4908]: I0131 07:22:36.388579 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.398737 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:31:31.678951797 +0000 UTC Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.402360 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.416283 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.419837 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 
07:22:36.419884 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.419901 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.419922 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.419938 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.430408 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.440376 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.461413 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:33Z\\\",\\\"message\\\":\\\"factory.go:141\\\\nI0131 07:22:32.656244 6292 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:32.656475 6292 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
07:22:32.656490 6292 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:32.656522 6292 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:32.656536 6292 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:32.656552 6292 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:32.656587 6292 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:32.656607 6292 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:32.656613 6292 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:32.656615 6292 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:32.656640 6292 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:32.656654 6292 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:32.656671 6292 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:32.657051 6292 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.473678 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.484103 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.495198 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.511055 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522003 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522055 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522073 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.522081 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.532404 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.544252 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.624702 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.624743 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.624753 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.624765 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.624774 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.737035 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.737070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.737080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.737095 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.737106 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.839109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.839150 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.839162 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.839178 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.839189 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.939993 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:36 crc kubenswrapper[4908]: E0131 07:22:36.940111 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.941616 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.941644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.941653 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.941664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:36 crc kubenswrapper[4908]: I0131 07:22:36.941675 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:36Z","lastTransitionTime":"2026-01-31T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.045117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.045159 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.045169 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.045187 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.045199 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.148750 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.148820 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.148844 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.148875 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.148898 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.251825 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.251867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.251879 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.251897 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.251910 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.310936 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/1.log" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.311769 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/0.log" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.314708 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808" exitCode=1 Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.314748 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.314780 4908 scope.go:117] "RemoveContainer" containerID="bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.316476 4908 scope.go:117] "RemoveContainer" containerID="7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808" Jan 31 07:22:37 crc kubenswrapper[4908]: E0131 07:22:37.316771 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.332660 4908 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.346403 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.353800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.353851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.353869 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.353894 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.353911 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.362059 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.370530 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.388540 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.399141 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:09:37.422348193 +0000 UTC Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.404271 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.414041 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc 
kubenswrapper[4908]: I0131 07:22:37.426972 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643
a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.437874 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.450567 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.456670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.456714 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.456727 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.456743 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.456752 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.462029 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.478993 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:33Z\\\",\\\"message\\\":\\\"factory.go:141\\\\nI0131 07:22:32.656244 6292 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:32.656475 6292 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
07:22:32.656490 6292 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:32.656522 6292 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:32.656536 6292 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:32.656552 6292 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:32.656587 6292 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:32.656607 6292 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:32.656613 6292 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:32.656615 6292 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:32.656640 6292 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:32.656654 6292 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:32.656671 6292 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:32.657051 6292 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.491522 4908 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.502242 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.514084 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.529471 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.543163 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.558691 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.558723 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.558735 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.558776 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.558789 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.661429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.661480 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.661492 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.661510 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.661523 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.763857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.763892 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.763900 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.763913 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.763923 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.866835 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.867068 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.867123 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.867156 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.867179 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.940497 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.940549 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:37 crc kubenswrapper[4908]: E0131 07:22:37.940668 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.940686 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:37 crc kubenswrapper[4908]: E0131 07:22:37.940852 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:37 crc kubenswrapper[4908]: E0131 07:22:37.940963 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.965114 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.969416 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.969467 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.969484 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.969506 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.969521 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:37Z","lastTransitionTime":"2026-01-31T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:37 crc kubenswrapper[4908]: I0131 07:22:37.981327 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf
36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.004504 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.018261 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.038745 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed6706d35d82710c1c15eebe07b4a161e74e35734e8a63ea1dbbdc620e6ead5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:33Z\\\",\\\"message\\\":\\\"factory.go:141\\\\nI0131 07:22:32.656244 6292 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:32.656475 6292 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
07:22:32.656490 6292 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:32.656522 6292 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:32.656536 6292 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:32.656552 6292 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:32.656587 6292 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:32.656607 6292 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:32.656613 6292 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:32.656615 6292 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:32.656640 6292 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:32.656654 6292 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:32.656671 6292 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:32.657051 6292 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.051708 4908 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.065029 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.071821 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.072070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.072078 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.072091 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.072100 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.081516 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.093142 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.107597 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.124941 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.141882 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.156804 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.166495 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.174480 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.174508 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.174517 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.174531 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.174540 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.181160 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.190864 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.202540 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc 
kubenswrapper[4908]: I0131 07:22:38.277195 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.277243 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.277254 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.277270 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.277282 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.326509 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/1.log" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.330317 4908 scope.go:117] "RemoveContainer" containerID="7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808" Jan 31 07:22:38 crc kubenswrapper[4908]: E0131 07:22:38.330560 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.355798 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addb
adb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.367759 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.377835 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.379796 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.379852 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.379877 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.379905 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.379926 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.389354 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.399345 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:36:02.731393208 +0000 UTC Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.399394 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.408584 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.417547 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.427624 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.449498 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.463022 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.473181 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.481946 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.482013 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.482032 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.482045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.482054 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.485255 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.495033 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.504238 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.514552 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.525778 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.538067 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.584822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.584859 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.584867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.584880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.584888 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.686810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.686845 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.686853 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.686867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.686876 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.790221 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.790285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.790302 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.790327 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.790347 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.893236 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.893285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.893294 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.893307 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.893316 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.939119 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:38 crc kubenswrapper[4908]: E0131 07:22:38.939300 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.995475 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.995505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.995534 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.995548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:38 crc kubenswrapper[4908]: I0131 07:22:38.995556 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:38Z","lastTransitionTime":"2026-01-31T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.097712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.097760 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.097772 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.097789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.097801 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.200337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.200391 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.200407 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.200426 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.200440 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.303153 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.303484 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.303627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.303813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.303962 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.400072 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:53:30.050883145 +0000 UTC Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.407244 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.407337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.407356 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.407380 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.407396 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.510603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.510667 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.510826 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.510906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.510928 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.615020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.615087 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.615105 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.615132 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.615152 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.644527 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.644690 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.644771 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:55.64475347 +0000 UTC m=+82.260698124 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.699596 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.699641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.699649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.699667 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.699675 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.714580 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.719290 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.719331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.719343 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.719362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.719375 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.733845 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.737745 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.737783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.737795 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.737813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.737826 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.753807 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.757030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.757078 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.757092 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.757112 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.757124 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.772605 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.776423 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.776477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.776498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.776521 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.776538 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.791473 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.791722 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.794886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.794925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.794941 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.794962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.795002 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.897401 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.897431 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.897438 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.897450 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.897458 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.940238 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.940278 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.940290 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.940418 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.940477 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:39 crc kubenswrapper[4908]: E0131 07:22:39.940578 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.999801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.999840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.999852 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.999868 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:39 crc kubenswrapper[4908]: I0131 07:22:39.999880 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:39Z","lastTransitionTime":"2026-01-31T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.102944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.103038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.103065 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.103090 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.103107 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.205545 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.205590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.205601 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.205618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.205630 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.309620 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.309686 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.309699 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.309750 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.309768 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.401252 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:15:54.759926129 +0000 UTC Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.411857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.411917 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.411932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.411949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.411959 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.514366 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.514406 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.514417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.514431 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.514441 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.616343 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.616394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.616404 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.616418 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.616427 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.718717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.718768 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.718782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.718800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.718818 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.821417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.821485 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.821502 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.821524 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.821538 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.924215 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.924257 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.924272 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.924292 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.924307 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:40Z","lastTransitionTime":"2026-01-31T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:40 crc kubenswrapper[4908]: I0131 07:22:40.939495 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:40 crc kubenswrapper[4908]: E0131 07:22:40.939634 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.027455 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.027578 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.027591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.027610 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.027622 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.130522 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.130564 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.130575 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.130591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.130602 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.233036 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.233080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.233092 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.233109 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.233121 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.335173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.335271 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.335296 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.335711 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.335731 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.402043 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:35:40.811904765 +0000 UTC Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.438721 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.438769 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.438782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.438799 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.438811 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.541694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.541758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.541770 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.541792 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.541808 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.644412 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.644454 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.644463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.644550 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.644567 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.746198 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.746254 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.746269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.746286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.746296 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.849474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.849514 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.849524 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.849538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.849547 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.939466 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.939520 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.939610 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:41 crc kubenswrapper[4908]: E0131 07:22:41.939753 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:41 crc kubenswrapper[4908]: E0131 07:22:41.939887 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:41 crc kubenswrapper[4908]: E0131 07:22:41.939953 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.951811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.951842 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.951851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.951884 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:41 crc kubenswrapper[4908]: I0131 07:22:41.951894 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:41Z","lastTransitionTime":"2026-01-31T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.054633 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.054695 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.054717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.054749 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.054771 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.157399 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.157447 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.157459 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.157476 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.157489 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.260147 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.260206 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.260223 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.260244 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.260260 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.362457 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.362495 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.362504 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.362516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.362525 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.403115 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:34:33.714116493 +0000 UTC Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.464841 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.464899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.464923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.464938 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.464946 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.570289 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.570330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.570340 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.570355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.570365 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.673200 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.673250 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.673268 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.673320 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.673335 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.776152 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.776210 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.776222 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.776244 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.776257 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.879277 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.879312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.879323 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.879337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.879346 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.939497 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:42 crc kubenswrapper[4908]: E0131 07:22:42.939617 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.982037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.982360 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.982377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.982394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:42 crc kubenswrapper[4908]: I0131 07:22:42.982409 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:42Z","lastTransitionTime":"2026-01-31T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.085578 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.085627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.085637 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.085654 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.085669 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.188230 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.188488 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.188744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.188869 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.188999 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.292624 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.292673 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.292690 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.292712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.292730 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.395514 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.395561 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.395573 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.395592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.395604 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.403655 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:36:09.564881067 +0000 UTC Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.497811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.497886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.497899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.497915 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.497923 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.600958 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.601065 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.601079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.601100 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.601114 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.703742 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.703798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.703810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.703827 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.703841 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.806330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.806539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.806645 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.806710 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.806778 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.909519 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.909737 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.909798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.909907 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.910030 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:43Z","lastTransitionTime":"2026-01-31T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.939881 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.939906 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:43 crc kubenswrapper[4908]: I0131 07:22:43.940002 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:43 crc kubenswrapper[4908]: E0131 07:22:43.940142 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:43 crc kubenswrapper[4908]: E0131 07:22:43.940213 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:43 crc kubenswrapper[4908]: E0131 07:22:43.940288 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.026304 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.026339 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.026348 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.026361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.026371 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.128949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.129009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.129020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.129037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.129047 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.231399 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.231437 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.231448 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.231462 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.231471 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.333576 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.333606 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.333614 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.333627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.333636 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.404808 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:26:57.875219387 +0000 UTC Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.436621 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.436675 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.436688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.436708 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.436719 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.540452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.540518 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.540532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.540556 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.540572 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.643282 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.643330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.643367 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.643383 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.643396 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.746267 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.746326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.746346 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.746369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.746385 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.849662 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.849736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.849757 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.849797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.849816 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.940074 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:44 crc kubenswrapper[4908]: E0131 07:22:44.940245 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.952717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.952761 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.952774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.952791 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:44 crc kubenswrapper[4908]: I0131 07:22:44.952803 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:44Z","lastTransitionTime":"2026-01-31T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.055590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.055671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.055715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.055733 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.055747 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.158020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.158081 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.158104 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.158128 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.158145 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.261165 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.261204 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.261216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.261233 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.261248 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.363312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.363349 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.363361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.363376 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.363388 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.405850 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:47:12.765125293 +0000 UTC Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.465568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.465600 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.465610 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.465625 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.465636 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.568415 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.568496 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.568512 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.568535 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.568551 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.670793 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.670828 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.670836 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.670849 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.670858 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.772574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.772619 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.772630 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.772646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.772656 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.874757 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.874798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.874807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.874822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.874831 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.939351 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.939415 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.939526 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:45 crc kubenswrapper[4908]: E0131 07:22:45.939635 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:45 crc kubenswrapper[4908]: E0131 07:22:45.939804 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:45 crc kubenswrapper[4908]: E0131 07:22:45.939864 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.977766 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.977811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.977822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.977843 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:45 crc kubenswrapper[4908]: I0131 07:22:45.977855 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:45Z","lastTransitionTime":"2026-01-31T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.083766 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.083793 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.083801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.083815 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.083824 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.185802 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.185896 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.185918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.185951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.186021 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.289125 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.289174 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.289188 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.289208 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.289220 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.392582 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.392663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.392685 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.392720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.392741 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.406644 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:30:29.555506764 +0000 UTC Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.495766 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.495840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.495860 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.495886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.495901 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.598333 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.598375 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.598386 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.598402 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.598415 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.700806 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.700868 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.700885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.700908 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.700926 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.803892 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.803927 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.803937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.803953 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.803963 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.906318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.906360 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.906369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.906385 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.906396 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:46Z","lastTransitionTime":"2026-01-31T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:46 crc kubenswrapper[4908]: I0131 07:22:46.939414 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:46 crc kubenswrapper[4908]: E0131 07:22:46.939527 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.008199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.008229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.008237 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.008250 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.008259 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.110274 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.110310 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.110318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.110334 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.110343 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.213258 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.213305 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.213319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.213339 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.213351 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.315866 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.315903 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.315914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.315931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.315943 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.407585 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:17:12.804057701 +0000 UTC Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.418357 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.418378 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.418387 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.418436 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.418449 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.520664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.520702 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.520713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.520729 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.520741 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.622585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.622634 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.622647 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.622666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.622677 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.726319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.726371 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.726382 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.726402 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.726415 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.828718 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.828763 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.828774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.828792 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.828806 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.930965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.931050 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.931070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.931088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.931103 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:47Z","lastTransitionTime":"2026-01-31T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.939183 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.939228 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.939273 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:47 crc kubenswrapper[4908]: E0131 07:22:47.939374 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:47 crc kubenswrapper[4908]: E0131 07:22:47.939471 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:47 crc kubenswrapper[4908]: E0131 07:22:47.939536 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.951933 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.964491 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.977753 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:47 crc kubenswrapper[4908]: I0131 07:22:47.989415 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.002207 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.012650 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.025697 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.034022 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.034101 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.034116 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.034136 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.034148 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.040164 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.048349 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.058205 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.068790 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.079062 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.089635 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.099624 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.114691 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.126102 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.136194 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.136235 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.136247 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.136264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.136275 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.137439 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.238529 4908 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.238564 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.238574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.238590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.238601 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.340719 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.340751 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.340762 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.340777 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.340787 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.408710 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:31:15.968511289 +0000 UTC Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.442744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.442818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.442839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.443255 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.443494 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.546226 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.546269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.546280 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.546296 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.546310 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.648305 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.648355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.648370 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.648387 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.648398 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.751021 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.751337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.751443 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.751539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.751627 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.854548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.854640 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.854664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.854688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.854708 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.940082 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:48 crc kubenswrapper[4908]: E0131 07:22:48.940289 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.956668 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.957587 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.957618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.957647 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:48 crc kubenswrapper[4908]: I0131 07:22:48.957674 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:48Z","lastTransitionTime":"2026-01-31T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.059751 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.059793 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.059802 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.059817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.059828 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.162693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.162723 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.162732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.162744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.162753 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.266285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.266332 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.266344 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.266361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.266372 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.369450 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.369509 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.369525 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.369549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.369566 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.409135 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:39:46.889914702 +0000 UTC Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.478835 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.478884 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.478899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.478918 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.478929 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.581600 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.581653 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.581669 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.581694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.581713 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.684529 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.684572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.684584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.684603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.684615 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.787950 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.787999 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.788009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.788026 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.788036 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.890326 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.890434 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.890446 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.890463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.890475 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.939724 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.939758 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:49 crc kubenswrapper[4908]: E0131 07:22:49.939918 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.940046 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:49 crc kubenswrapper[4908]: E0131 07:22:49.940040 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:49 crc kubenswrapper[4908]: E0131 07:22:49.940129 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.940849 4908 scope.go:117] "RemoveContainer" containerID="7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.993842 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.993882 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.993895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.993917 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:49 crc kubenswrapper[4908]: I0131 07:22:49.993934 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:49Z","lastTransitionTime":"2026-01-31T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.030705 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.031152 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.031167 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.031190 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.031205 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.043142 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:50Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.047740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.047761 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.047774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.047787 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.047796 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.060632 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:50Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.064133 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.064167 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.064177 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.064196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.064207 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.075449 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:50Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.079108 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.079200 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.079294 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.079360 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.079418 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.090758 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:50Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.094278 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.094315 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.094348 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.094365 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.094378 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.105528 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:50Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.105629 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.106937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.106956 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.106963 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.106988 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.106997 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.210059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.210113 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.210121 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.210134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.210144 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.312832 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.313210 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.313347 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.313438 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.313522 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.410297 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:01:46.631612744 +0000 UTC Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.417520 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.417550 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.417558 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.417572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.417581 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.519551 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.519613 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.519640 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.519661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.519676 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.622720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.622773 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.622783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.622798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.622808 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.725304 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.725334 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.725343 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.725357 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.725366 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.827613 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.827644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.827653 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.827666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.827675 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.931287 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.931351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.931369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.931393 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.931406 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:50Z","lastTransitionTime":"2026-01-31T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:50 crc kubenswrapper[4908]: I0131 07:22:50.939528 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:50 crc kubenswrapper[4908]: E0131 07:22:50.939690 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.038222 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.038260 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.038271 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.038287 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.038299 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.140521 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.140564 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.140576 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.140593 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.140606 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.243606 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.243662 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.243675 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.243886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.243898 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.347195 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.347229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.347239 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.347256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.347266 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.374649 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/1.log" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.377350 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.377768 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.390317 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.401857 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.411190 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:22:42.260573542 +0000 UTC Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.424851 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.446701 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.449256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.449306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.449317 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.449342 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.449353 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.463848 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc 
kubenswrapper[4908]: I0131 07:22:51.484270 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.500905 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.518314 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.529664 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.551069 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.552091 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.552138 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.552146 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.552164 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.552178 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.574616 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built 
s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.587805 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.597764 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.611516 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.623194 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.634384 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.644923 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:51Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.655012 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.655070 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.655083 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.655105 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.655117 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.757957 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.758025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.758038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.758055 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.758069 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.860272 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.860312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.860323 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.860337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.860347 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.939825 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.939852 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:51 crc kubenswrapper[4908]: E0131 07:22:51.939944 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.940049 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:51 crc kubenswrapper[4908]: E0131 07:22:51.940161 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:51 crc kubenswrapper[4908]: E0131 07:22:51.940372 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.963671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.963896 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.963906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.963922 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:51 crc kubenswrapper[4908]: I0131 07:22:51.963933 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:51Z","lastTransitionTime":"2026-01-31T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.065814 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.065858 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.065868 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.065883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.065893 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.168284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.168330 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.168341 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.168360 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.168373 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.270815 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.270883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.270902 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.270925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.270943 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.373178 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.373216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.373225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.373237 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.373248 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.380435 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/2.log" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.380927 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/1.log" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.383181 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" exitCode=1 Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.383216 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.383257 4908 scope.go:117] "RemoveContainer" containerID="7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.387230 4908 scope.go:117] "RemoveContainer" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" Jan 31 07:22:52 crc kubenswrapper[4908]: E0131 07:22:52.387447 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.396253 4908 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.406818 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.412363 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:49:43.932069979 +0000 UTC Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.419912 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.432119 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.442925 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.453433 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.462643 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475340 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475366 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475374 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475387 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475395 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.475502 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.485264 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.494148 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc 
kubenswrapper[4908]: I0131 07:22:52.505855 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643
a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.516558 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.529058 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.541744 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.560553 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3f10bbbeb862ad1f6bfa586f98e588ee502868a66e341f65877c2e7e872808\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:36Z\\\",\\\"message\\\":\\\"alse, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 07:22:36.106270 6549 lb_config.go:1031] Cluster endpoints for default/kubernetes for network=default are: map[TCP/https:{6443 [192.168.126.11] []}]\\\\nI0131 07:22:36.106285 6549 services_controller.go:452] Built service openshift-ingress/router-internal-default per-node LB for network=default: []services.LB{}\\\\nI0131 07:22:36.106274 6549 port_cache.go:96] port-cache(openshift-network-console_networking-console-plugin-85b44fc459-gdk6g): added port \\\\u0026{name:openshift-network-console_networking-console-plugin-85b44fc459-gdk6g uuid:c94130be-172c-477c-88c4-40cc7eba30fe logicalSwitch:crc ips:[0xc009063e30] mac:[10 88 10 217 0 92] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI0131 07:22:36.106292 6549 services_controller.go:443] Built s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 
6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.573817 4908 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-co
ntroller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.577386 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.577408 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.577416 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.577429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.577437 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.586681 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:52Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.679535 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.679581 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.679592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.679609 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.679624 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.783261 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.783297 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.783306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.783321 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.783330 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.885678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.885715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.885909 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.885926 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.885938 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.939626 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:52 crc kubenswrapper[4908]: E0131 07:22:52.940003 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.988437 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.988496 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.988515 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.988538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:52 crc kubenswrapper[4908]: I0131 07:22:52.988555 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:52Z","lastTransitionTime":"2026-01-31T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.091169 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.091226 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.091238 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.091258 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.091269 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.194098 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.194152 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.194175 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.194199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.194215 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.296595 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.296632 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.296641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.296655 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.296665 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.389895 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/2.log" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.395028 4908 scope.go:117] "RemoveContainer" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" Jan 31 07:22:53 crc kubenswrapper[4908]: E0131 07:22:53.395171 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.398948 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.399089 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.399149 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.399246 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.399315 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.412473 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:24:57.767565366 +0000 UTC Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.412621 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.429313 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.444014 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.461720 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.473521 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.484857 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.494539 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.501839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.501873 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.501883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.501898 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.501909 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.512213 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.523025 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.533138 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc 
kubenswrapper[4908]: I0131 07:22:53.547848 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.558104 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.572800 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.583054 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.594183 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.604644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc 
kubenswrapper[4908]: I0131 07:22:53.604683 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.604693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.604712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.604725 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.614261 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.628567 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:53Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.707462 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.707504 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.707516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.707536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.707552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.809958 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.810036 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.810045 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.810058 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.810067 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.912923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.913211 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.913439 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.913536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.913626 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:53Z","lastTransitionTime":"2026-01-31T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.939669 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.939782 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:53 crc kubenswrapper[4908]: I0131 07:22:53.939669 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:53 crc kubenswrapper[4908]: E0131 07:22:53.939799 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:53 crc kubenswrapper[4908]: E0131 07:22:53.940046 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:53 crc kubenswrapper[4908]: E0131 07:22:53.940113 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.016536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.016584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.016598 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.016615 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.016628 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.119951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.120005 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.120017 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.120037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.120049 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.223382 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.223452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.223466 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.223516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.223532 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.326027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.326075 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.326088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.326105 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.326120 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.412913 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:07:26.995286021 +0000 UTC Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.428594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.428636 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.428649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.428669 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.428680 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.530903 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.530930 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.530939 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.530952 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.530960 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.634138 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.634182 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.634198 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.634219 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.634235 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.738396 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.738449 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.738466 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.738489 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.738507 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.841143 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.841238 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.841255 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.841280 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.841298 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.939646 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:54 crc kubenswrapper[4908]: E0131 07:22:54.939757 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.943519 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.943545 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.943561 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.943575 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:54 crc kubenswrapper[4908]: I0131 07:22:54.943584 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:54Z","lastTransitionTime":"2026-01-31T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.045501 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.045537 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.045553 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.045569 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.045578 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.148166 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.148204 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.148241 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.148255 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.148264 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.251011 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.251079 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.251107 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.251135 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.251168 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.353863 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.353903 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.353914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.353931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.353942 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.413819 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:27:44.62095314 +0000 UTC Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.456119 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.456159 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.456200 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.456221 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.456232 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.558854 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.558894 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.558906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.558923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.558933 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.664915 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.664946 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.664954 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.664967 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.665000 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.745231 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:55 crc kubenswrapper[4908]: E0131 07:22:55.745509 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:55 crc kubenswrapper[4908]: E0131 07:22:55.745603 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:23:27.745571266 +0000 UTC m=+114.361515960 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.768845 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.768913 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.768936 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.768964 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.769024 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.872187 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.872249 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.872261 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.872498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.872508 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.939519 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.939519 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:55 crc kubenswrapper[4908]: E0131 07:22:55.940313 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.939579 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:55 crc kubenswrapper[4908]: E0131 07:22:55.940596 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:55 crc kubenswrapper[4908]: E0131 07:22:55.940773 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.975147 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.975176 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.975183 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.975195 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:55 crc kubenswrapper[4908]: I0131 07:22:55.975204 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:55Z","lastTransitionTime":"2026-01-31T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.077942 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.078034 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.078056 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.078081 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.078099 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.181053 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.181135 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.181148 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.181182 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.181195 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.284112 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.284179 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.284197 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.284221 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.284237 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.387471 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.387580 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.387601 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.387627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.387645 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.414484 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:42:02.638845279 +0000 UTC Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.490199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.490238 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.490249 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.490265 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.490276 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.593656 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.593699 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.593709 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.593724 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.593736 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.696339 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.696384 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.696400 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.696423 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.696441 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.799617 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.799704 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.799743 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.799777 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.799801 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.902594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.902642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.902654 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.902671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.902685 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:56Z","lastTransitionTime":"2026-01-31T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:56 crc kubenswrapper[4908]: I0131 07:22:56.939389 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:56 crc kubenswrapper[4908]: E0131 07:22:56.939526 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.005584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.005634 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.005646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.005662 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.005671 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.107861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.107916 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.107932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.107955 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.107993 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.210856 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.210896 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.210907 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.210923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.210936 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.313553 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.313605 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.313625 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.313646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.313661 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.414594 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:40:48.930733864 +0000 UTC Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.416027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.416060 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.416072 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.416088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.416099 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.518579 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.518618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.518627 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.518641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.518651 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.620964 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.621025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.621038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.621054 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.621068 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.723524 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.723571 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.723585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.723604 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.723619 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.827403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.827507 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.827525 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.827548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.827564 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.930038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.930117 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.930134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.930157 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.930174 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:57Z","lastTransitionTime":"2026-01-31T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.939423 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.939508 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:57 crc kubenswrapper[4908]: E0131 07:22:57.939583 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:57 crc kubenswrapper[4908]: E0131 07:22:57.939764 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.940483 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:57 crc kubenswrapper[4908]: E0131 07:22:57.941155 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.953949 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.965356 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.980758 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:57 crc kubenswrapper[4908]: I0131 07:22:57.991351 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.004773 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.014792 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc 
kubenswrapper[4908]: I0131 07:22:58.025149 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.032818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.032879 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.032897 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.032973 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.033081 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.037287 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.047802 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.057805 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.071768 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.081728 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.095119 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076
a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.106806 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.119637 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.129163 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.135523 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc 
kubenswrapper[4908]: I0131 07:22:58.135569 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.135580 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.135597 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.135608 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.145652 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:22:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.238765 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.238803 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.238816 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.238831 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.238840 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.342834 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.342927 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.342951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.343037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.343101 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.415188 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:58:38.841149499 +0000 UTC Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.446391 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.446452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.446475 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.446508 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.446528 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.548597 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.548632 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.548642 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.548655 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.548665 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.651399 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.651462 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.651475 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.651494 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.651506 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.753530 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.753579 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.753590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.753607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.753617 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.856463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.856512 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.856523 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.856541 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.856552 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.939207 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:22:58 crc kubenswrapper[4908]: E0131 07:22:58.939445 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.958805 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.958857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.958870 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.958885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:58 crc kubenswrapper[4908]: I0131 07:22:58.958898 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:58Z","lastTransitionTime":"2026-01-31T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.061397 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.061451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.061468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.061492 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.061510 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.163646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.163696 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.163712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.163733 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.163749 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.266401 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.266666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.266783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.266888 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.267011 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.369355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.369658 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.369762 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.369861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.369945 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.415589 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:38:32.495228236 +0000 UTC Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.473865 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.473961 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.474012 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.474038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.474057 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.577223 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.577261 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.577271 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.577287 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.577296 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.679360 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.679405 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.679416 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.679431 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.679440 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.782051 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.782112 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.782125 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.782142 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.782154 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.884869 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.884990 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.885005 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.885020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.885033 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.939440 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.939541 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:22:59 crc kubenswrapper[4908]: E0131 07:22:59.939630 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.939722 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:22:59 crc kubenswrapper[4908]: E0131 07:22:59.939891 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:22:59 crc kubenswrapper[4908]: E0131 07:22:59.940012 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.990212 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.990256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.990265 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.990299 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:22:59 crc kubenswrapper[4908]: I0131 07:22:59.990311 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:22:59Z","lastTransitionTime":"2026-01-31T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.092766 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.092809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.092826 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.092841 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.092853 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.196199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.196236 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.196245 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.196258 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.196269 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.298162 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.298200 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.298211 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.298229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.298239 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.352349 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.353001 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.353017 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.353035 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.353048 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.364538 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.367888 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.367923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.367933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.367949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.367960 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.378810 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.381955 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.382019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.382033 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.382047 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.382055 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.392078 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.395030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.395074 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.395086 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.395104 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.395136 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.415705 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:23:11.79439493 +0000 UTC Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.444209 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",
\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.447712 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.447744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.447753 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.447767 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.447777 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.460133 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.460259 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.461938 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.461966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.461974 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.461999 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.462007 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.567557 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.567603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.567617 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.567635 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.567648 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.670468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.670520 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.670530 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.670544 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.670555 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.772797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.772857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.772868 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.772885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.772896 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.875886 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.875938 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.875948 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.875965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.876020 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.939386 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:00 crc kubenswrapper[4908]: E0131 07:23:00.939517 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.978528 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.978581 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.978592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.978609 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:00 crc kubenswrapper[4908]: I0131 07:23:00.978620 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:00Z","lastTransitionTime":"2026-01-31T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.081193 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.081241 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.081253 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.081267 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.081278 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.183661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.183694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.183703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.183717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.183729 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.285611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.285657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.285670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.285687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.285702 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.388056 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.388100 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.388111 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.388128 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.388139 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.416006 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:49:17.922271633 +0000 UTC Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.490742 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.490784 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.490798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.490813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.490824 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.594145 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.594213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.594225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.594242 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.594255 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.696189 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.696225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.696240 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.696256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.696265 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.798146 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.798193 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.798202 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.798217 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.798227 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.900611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.900653 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.900666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.900682 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.900693 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:01Z","lastTransitionTime":"2026-01-31T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.940147 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.940206 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:01 crc kubenswrapper[4908]: I0131 07:23:01.940242 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:01 crc kubenswrapper[4908]: E0131 07:23:01.940295 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:01 crc kubenswrapper[4908]: E0131 07:23:01.940397 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:01 crc kubenswrapper[4908]: E0131 07:23:01.940480 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.002636 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.002678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.002691 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.002707 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.002719 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.104761 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.104800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.104809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.104823 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.104832 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.207226 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.207285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.207310 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.207331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.207347 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.309669 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.309692 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.309699 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.309713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.309722 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.412389 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.412429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.412437 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.412451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.412461 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.416628 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:01:28.856637916 +0000 UTC Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.514677 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.514726 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.514740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.514754 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.514764 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.616828 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.616879 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.616895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.616920 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.616943 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.719346 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.719385 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.719394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.719408 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.719418 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.821801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.821849 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.821862 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.821880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.821891 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.925885 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.925965 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.926027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.926059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.926083 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:02Z","lastTransitionTime":"2026-01-31T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:02 crc kubenswrapper[4908]: I0131 07:23:02.939076 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:02 crc kubenswrapper[4908]: E0131 07:23:02.939215 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.028843 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.028907 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.028925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.028950 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.028968 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.130766 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.130817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.130826 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.130840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.130849 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.232890 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.232928 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.232937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.232952 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.232964 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.335684 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.335720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.335727 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.335741 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.335751 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.416878 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:07:07.893429429 +0000 UTC Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.437416 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.437455 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.437463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.437481 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.437491 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.540174 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.540225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.540242 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.540263 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.540278 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.642830 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.642880 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.642892 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.642909 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.642921 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.745555 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.745618 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.745630 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.745651 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.745664 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.848701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.848753 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.848770 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.848787 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.848800 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.939988 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.939994 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:03 crc kubenswrapper[4908]: E0131 07:23:03.940119 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.940198 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:03 crc kubenswrapper[4908]: E0131 07:23:03.940328 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:03 crc kubenswrapper[4908]: E0131 07:23:03.940390 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.950238 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.950276 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.950285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.950300 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:03 crc kubenswrapper[4908]: I0131 07:23:03.950314 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:03Z","lastTransitionTime":"2026-01-31T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.052404 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.052445 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.052457 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.052473 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.052484 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.154804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.154850 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.154861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.154876 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.154889 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.257290 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.257328 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.257337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.257352 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.257363 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.359950 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.360020 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.360036 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.360052 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.360064 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.417387 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:08:26.054601348 +0000 UTC Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.462703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.462756 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.462768 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.462786 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.462797 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.564701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.564750 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.564758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.564771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.564780 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.667028 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.667098 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.667113 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.667134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.667513 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.769940 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.770055 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.770074 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.770093 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.770104 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.872264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.872322 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.872332 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.872346 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.872355 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.939140 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:04 crc kubenswrapper[4908]: E0131 07:23:04.939303 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.982361 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.982397 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.982413 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.982429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:04 crc kubenswrapper[4908]: I0131 07:23:04.982441 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:04Z","lastTransitionTime":"2026-01-31T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.085243 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.085283 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.085294 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.085309 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.085319 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.187302 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.187333 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.187341 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.187354 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.187362 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.290433 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.290477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.290487 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.290501 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.290510 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.393155 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.393225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.393242 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.393268 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.393285 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.417563 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:00:27.014679912 +0000 UTC Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.495693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.495751 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.495770 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.495790 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.495804 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.597893 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.597957 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.597969 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.598009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.598022 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.701207 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.701255 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.701266 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.701284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.701296 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.802971 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.803039 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.803050 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.803064 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.803073 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.905158 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.905218 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.905228 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.905241 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.905251 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:05Z","lastTransitionTime":"2026-01-31T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.939506 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.939544 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:05 crc kubenswrapper[4908]: E0131 07:23:05.939627 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:05 crc kubenswrapper[4908]: E0131 07:23:05.939849 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.940038 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:05 crc kubenswrapper[4908]: E0131 07:23:05.940115 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:05 crc kubenswrapper[4908]: I0131 07:23:05.940777 4908 scope.go:117] "RemoveContainer" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" Jan 31 07:23:05 crc kubenswrapper[4908]: E0131 07:23:05.940910 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.009155 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.009195 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.009210 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.009234 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.009249 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.112216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.112256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.112264 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.112278 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.112290 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.216213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.216284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.216306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.216334 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.216357 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.318867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.318912 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.318923 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.318940 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.318950 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.354467 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354599 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:10.354575618 +0000 UTC m=+156.970520272 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.354664 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.354707 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.354734 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.354753 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354814 4908 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354871 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:10.354856415 +0000 UTC m=+156.970801069 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354865 4908 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354909 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354928 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354943 4908 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354962 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:10.354937477 +0000 UTC m=+156.970882171 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.354892 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.355011 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:10.354971168 +0000 UTC m=+156.970915922 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.355014 4908 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.355030 4908 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.355073 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:10.35506032 +0000 UTC m=+156.971005034 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.418364 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:06:26.602067459 +0000 UTC Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.420773 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.420812 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.420821 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.420838 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.420850 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.523331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.523372 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.523395 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.523416 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.523427 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.625932 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.625999 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.626011 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.626024 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.626036 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.728140 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.728185 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.728194 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.728209 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.728230 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.831396 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.831452 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.831476 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.831505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.831524 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.934111 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.934173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.934192 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.934268 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.934465 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:06Z","lastTransitionTime":"2026-01-31T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.939380 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:06 crc kubenswrapper[4908]: E0131 07:23:06.939743 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:06 crc kubenswrapper[4908]: I0131 07:23:06.952302 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.037331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.037367 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.037375 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.037388 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.037397 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.139789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.139824 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.139833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.139846 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.139856 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.242420 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.242463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.242474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.242489 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.242500 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.344960 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.345004 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.345013 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.345026 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.345034 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.418909 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:20:47.271294227 +0000 UTC Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.441038 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/0.log" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.441084 4908 generic.go:334] "Generic (PLEG): container finished" podID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" containerID="1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a" exitCode=1 Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.441179 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerDied","Data":"1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.441596 4908 scope.go:117] "RemoveContainer" containerID="1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.446711 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.446744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.446756 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.446772 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.446783 4908 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.454073 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.465590 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.483290 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.496339 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.511816 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.525365 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.536817 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549230 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549507 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549527 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549537 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549552 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.549562 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.562010 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.574293 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.588161 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.603417 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc 
kubenswrapper[4908]: I0131 07:23:07.617048 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.636874 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652440 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652487 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652501 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652511 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.652639 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.664798 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.679013 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.690006 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.755058 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.755101 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.755113 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.755132 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.755327 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.858229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.858270 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.858279 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.858293 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.858302 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.940124 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.940125 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:07 crc kubenswrapper[4908]: E0131 07:23:07.940288 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.940148 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:07 crc kubenswrapper[4908]: E0131 07:23:07.940766 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:07 crc kubenswrapper[4908]: E0131 07:23:07.941000 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.956368 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.959437 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.961729 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.961759 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.961768 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.961782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.961792 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:07Z","lastTransitionTime":"2026-01-31T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.974112 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.988120 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:07 crc kubenswrapper[4908]: I0131 07:23:07.998960 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:07Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.010321 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc 
kubenswrapper[4908]: I0131 07:23:08.023815 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.036719 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.048502 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.058620 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.063804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.063839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.063851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.063865 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.063877 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.076508 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.094119 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.106970 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.120062 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.134464 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.146140 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.154474 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.164878 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.166076 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.166103 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.166114 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.166130 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.166140 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.174324 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.268848 4908 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.268881 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.268891 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.268906 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.268915 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.371098 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.371142 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.371153 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.371173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.371185 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.419176 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:46:09.621488967 +0000 UTC Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.446520 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/0.log" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.446788 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerStarted","Data":"194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.469929 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d9dc1fe-2c7d-4b28-a3d7-5fcecd5ba167\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.473693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.473757 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: 
I0131 07:23:08.473772 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.473795 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.473813 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.486510 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiser
ver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.499582 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.513590 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.525711 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.548062 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.562236 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.574336 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.575969 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.576021 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.576035 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.576059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.576074 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.585207 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.598883 4908 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.613036 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.627789 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.639378 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc 
kubenswrapper[4908]: I0131 07:23:08.651518 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.666123 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.678524 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.679030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 
07:23:08.679082 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.679094 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.679111 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.679120 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.689031 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.704525 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.715578 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.780990 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.781033 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.781047 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.781064 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.781075 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.882568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.882601 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.882611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.882624 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.882634 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.939164 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:08 crc kubenswrapper[4908]: E0131 07:23:08.939289 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.985369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.985407 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.985417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.985432 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:08 crc kubenswrapper[4908]: I0131 07:23:08.985441 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:08Z","lastTransitionTime":"2026-01-31T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.088937 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.088998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.089009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.089029 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.089041 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.191843 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.192159 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.192175 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.192192 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.192204 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.294148 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.294192 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.294201 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.294216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.294225 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.396705 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.396762 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.396786 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.396813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.396846 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.420412 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:44:05.77247405 +0000 UTC Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.499272 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.499333 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.499345 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.499363 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.499373 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.602306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.602355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.602364 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.602381 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.602391 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.705325 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.705386 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.705403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.705431 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.705448 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.807943 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.808015 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.808031 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.808048 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.808060 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.910817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.910859 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.910868 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.910883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.910893 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:09Z","lastTransitionTime":"2026-01-31T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.939471 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.939498 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:09 crc kubenswrapper[4908]: I0131 07:23:09.939531 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:09 crc kubenswrapper[4908]: E0131 07:23:09.939589 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:09 crc kubenswrapper[4908]: E0131 07:23:09.939688 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:09 crc kubenswrapper[4908]: E0131 07:23:09.939734 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.012818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.012919 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.012933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.012948 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.012959 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.115530 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.115562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.115571 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.115585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.115597 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.217935 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.218285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.218306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.218340 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.218356 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.320821 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.320856 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.320867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.320883 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.320894 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.421500 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:06:57.761304572 +0000 UTC Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.423389 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.423436 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.423450 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.423467 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.423479 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.526358 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.526411 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.526421 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.526435 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.526445 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.576951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.577025 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.577034 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.577047 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.577056 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.592525 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.596394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.596432 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.596444 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.596459 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.596470 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.608851 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.613149 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.613181 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.613189 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.613205 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.613214 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.648639 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.655736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.655771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.655779 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.655794 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.655804 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.674875 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.679691 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.679732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.679740 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.679753 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.679762 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.694746 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:10Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.694922 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.696230 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.696259 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.696271 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.696286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.696299 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.798177 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.798536 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.798691 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.798833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.799007 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.901782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.901835 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.901854 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.901903 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.901920 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:10Z","lastTransitionTime":"2026-01-31T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:10 crc kubenswrapper[4908]: I0131 07:23:10.940040 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:10 crc kubenswrapper[4908]: E0131 07:23:10.940168 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.004247 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.004288 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.004298 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.004318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.004329 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.106633 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.106678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.106688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.106705 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.106715 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.208966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.209027 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.209039 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.209059 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.209070 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.310929 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.310970 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.310993 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.311008 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.311017 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.413888 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.413939 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.413951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.413966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.414022 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.422539 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:40:10.478215935 +0000 UTC Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.517307 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.517376 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.517403 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.517429 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.517450 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.620150 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.620186 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.620196 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.620212 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.620224 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.723394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.723446 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.723463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.723487 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.723504 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.826894 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.826956 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.827011 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.827041 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.827064 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.929042 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.929115 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.929135 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.929161 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.929177 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:11Z","lastTransitionTime":"2026-01-31T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.939472 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.939587 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:11 crc kubenswrapper[4908]: E0131 07:23:11.939701 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:11 crc kubenswrapper[4908]: I0131 07:23:11.939796 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:11 crc kubenswrapper[4908]: E0131 07:23:11.939880 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:11 crc kubenswrapper[4908]: E0131 07:23:11.939964 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.030955 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.031214 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.031285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.031347 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.031428 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.134374 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.134689 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.134914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.135042 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.135140 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.238235 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.238629 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.238975 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.239243 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.239390 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.341717 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.341757 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.341774 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.341796 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.341810 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.423247 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:16:07.026733725 +0000 UTC Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.443944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.444008 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.444016 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.444030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.444041 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.545591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.545846 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.545933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.546057 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.546148 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.648094 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.648124 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.648132 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.648145 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.648154 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.750215 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.750471 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.750549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.750625 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.750694 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.853293 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.853355 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.853374 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.853401 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.853418 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.939197 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:12 crc kubenswrapper[4908]: E0131 07:23:12.940292 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.955861 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.955891 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.955902 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.955919 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:12 crc kubenswrapper[4908]: I0131 07:23:12.955929 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:12Z","lastTransitionTime":"2026-01-31T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.058657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.058694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.058708 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.058726 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.058744 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.161247 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.161311 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.161327 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.161349 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.161365 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.263873 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.263925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.263942 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.263968 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.264053 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.368316 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.368347 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.368356 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.368369 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.368379 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.424458 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:13:28.218137303 +0000 UTC Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.469992 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.470029 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.470037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.470050 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.470061 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.572427 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.572475 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.572488 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.572505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.572516 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.674603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.674644 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.674657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.674672 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.674683 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.777014 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.777069 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.777081 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.777098 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.777114 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.880581 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.880694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.880708 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.880728 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.880740 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.939149 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:13 crc kubenswrapper[4908]: E0131 07:23:13.939278 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.939332 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.939444 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:13 crc kubenswrapper[4908]: E0131 07:23:13.939689 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:13 crc kubenswrapper[4908]: E0131 07:23:13.939877 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.983842 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.983882 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.983893 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.983909 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:13 crc kubenswrapper[4908]: I0131 07:23:13.983923 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:13Z","lastTransitionTime":"2026-01-31T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.098804 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.098851 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.098866 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.098888 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.098902 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.201130 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.201166 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.201174 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.201192 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.201202 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.304624 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.304693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.304719 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.304750 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.304771 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.407042 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.407080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.407091 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.407104 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.407113 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.425541 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:44:52.833777013 +0000 UTC Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.509486 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.509560 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.509572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.509611 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.509626 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.612594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.612663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.612675 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.612693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.612708 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.715551 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.715827 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.715839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.715854 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.715863 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.819344 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.819417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.819439 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.819466 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.819487 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.922596 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.922651 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.922664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.922683 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.922698 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:14Z","lastTransitionTime":"2026-01-31T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:14 crc kubenswrapper[4908]: I0131 07:23:14.940041 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:14 crc kubenswrapper[4908]: E0131 07:23:14.940314 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.026056 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.026147 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.026165 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.026192 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.026212 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.129491 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.129527 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.129539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.129560 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.129572 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.232206 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.232454 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.232527 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.232620 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.232703 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.336329 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.336409 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.336437 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.336468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.336488 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.426110 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:35:23.819954188 +0000 UTC Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.438681 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.438904 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.439012 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.439134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.439206 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.541654 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.541704 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.541713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.541725 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.541734 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.644350 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.644377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.644386 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.644398 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.644408 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.747165 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.747220 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.747237 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.747260 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.747278 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.849445 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.849482 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.849493 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.849511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.849521 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.940088 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.940118 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.940150 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:15 crc kubenswrapper[4908]: E0131 07:23:15.940228 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:15 crc kubenswrapper[4908]: E0131 07:23:15.940313 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:15 crc kubenswrapper[4908]: E0131 07:23:15.940362 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.951353 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.951577 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.951670 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.951809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:15 crc kubenswrapper[4908]: I0131 07:23:15.951890 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:15Z","lastTransitionTime":"2026-01-31T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.053958 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.054028 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.054038 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.054051 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.054062 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.156477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.156516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.156525 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.156539 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.156548 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.258380 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.258623 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.258693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.258771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.258849 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.361173 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.361440 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.361526 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.361614 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.361708 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.426909 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:49:15.174516452 +0000 UTC Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.464167 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.464225 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.464248 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.464276 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.464295 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.566931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.567016 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.567034 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.567056 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.567072 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.669317 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.669351 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.669362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.669377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.669388 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.774499 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.774548 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.774584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.774612 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.774660 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.876758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.876800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.876814 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.876835 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.876849 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.939336 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:16 crc kubenswrapper[4908]: E0131 07:23:16.939493 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11"
Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.978944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.978972 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.979004 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.979019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:16 crc kubenswrapper[4908]: I0131 07:23:16.979031 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:16Z","lastTransitionTime":"2026-01-31T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.081583 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.081659 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.081676 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.081699 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.081715 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.185507 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.185570 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.185646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.185678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.185699 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.288893 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.288940 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.288953 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.288973 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.289004 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.391652 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.391744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.391758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.391783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.391801 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.427017 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:54:27.397083853 +0000 UTC
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.494662 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.494719 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.494781 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.494810 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.494824 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.598110 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.598176 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.598189 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.598216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.598234 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.702422 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.702512 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.702538 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.702573 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.702596 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.806619 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.806666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.806681 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.806701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.806717 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.909515 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.909594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.909609 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.909635 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.909650 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:17Z","lastTransitionTime":"2026-01-31T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.939459 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 07:23:17 crc kubenswrapper[4908]: E0131 07:23:17.939607 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.939458 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 07:23:17 crc kubenswrapper[4908]: E0131 07:23:17.939842 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.940023 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 07:23:17 crc kubenswrapper[4908]: E0131 07:23:17.940100 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.952254 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.962005 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.974738 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:17 crc kubenswrapper[4908]: I0131 07:23:17.991852 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addb
adb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:17Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.006588 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.012434 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.012473 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.012488 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.012511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.012526 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.018839 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc 
kubenswrapper[4908]: I0131 07:23:18.033261 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.048633 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.062242 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.075064 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.089134 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.115485 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.115800 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.116088 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.116119 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.116151 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.116178 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.141677 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d9dc1fe-2c7d-4b28-a3d7-5fcecd5ba167\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.165071 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.179955 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.200265 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.210521 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.218352 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.218398 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.218416 4908 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.218439 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.218456 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.226662 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.240441 4908 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.321554 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.321594 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.321602 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.321616 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.321627 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.424180 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.424217 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.424248 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.424262 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.424270 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.428002 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:04:06.977390307 +0000 UTC Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.526727 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.526759 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.526767 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.526780 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.526789 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.629735 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.629798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.629811 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.629828 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.629841 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.731534 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.731608 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.731630 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.731658 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.731682 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.833974 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.834034 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.834046 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.834064 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.834074 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.936767 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.936809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.936822 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.936839 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.936850 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:18Z","lastTransitionTime":"2026-01-31T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:18 crc kubenswrapper[4908]: I0131 07:23:18.940125 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:18 crc kubenswrapper[4908]: E0131 07:23:18.940376 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.039941 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.040031 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.040050 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.040073 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.040091 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.142919 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.143017 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.143030 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.143047 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.143055 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.245703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.245758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.245771 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.245792 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.245807 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.348398 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.348438 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.348447 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.348463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.348473 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.428396 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:14:08.238892145 +0000 UTC Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.450610 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.450674 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.450692 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.450720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.450748 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.553000 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.553041 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.553049 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.553062 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.553071 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.655380 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.655412 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.655423 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.655457 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.655467 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.757605 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.757641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.757649 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.757663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.757672 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.860250 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.860277 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.860285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.860297 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.860306 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.939541 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:19 crc kubenswrapper[4908]: E0131 07:23:19.939662 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.940174 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.940260 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.940425 4908 scope.go:117] "RemoveContainer" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" Jan 31 07:23:19 crc kubenswrapper[4908]: E0131 07:23:19.943165 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:19 crc kubenswrapper[4908]: E0131 07:23:19.940661 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.962901 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.962950 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.962962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.962996 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:19 crc kubenswrapper[4908]: I0131 07:23:19.963010 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:19Z","lastTransitionTime":"2026-01-31T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.066216 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.066251 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.066259 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.066272 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.066283 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.168463 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.168829 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.168840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.168860 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.168871 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.271794 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.271825 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.271833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.271847 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.271887 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.375417 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.375461 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.375474 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.375492 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.375503 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.429260 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:52:27.950460784 +0000 UTC Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.477676 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.477820 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.477840 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.477867 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.477885 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.580244 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.580286 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.580297 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.580313 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.580325 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.682476 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.682511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.682519 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.682532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.682543 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.783949 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.784007 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.784023 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.784037 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.784047 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.896275 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.896341 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.896362 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.896396 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.896418 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:20Z","lastTransitionTime":"2026-01-31T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:20 crc kubenswrapper[4908]: I0131 07:23:20.939551 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:20 crc kubenswrapper[4908]: E0131 07:23:20.939813 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.000108 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.000198 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.000223 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.000254 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.000277 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.066872 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.066931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.066946 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.066966 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.066996 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.079381 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.083237 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.083278 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.083288 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.083303 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.083315 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.095510 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.099711 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.099756 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.099768 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.099989 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.099999 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.112109 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.115612 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.115694 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.115715 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.115739 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.115754 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.127331 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.130422 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.130481 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.130493 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.130513 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.130525 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.142354 4908 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1f9ea-64bc-4ee6-b73e-d54792ad39f9\\\",\\\"systemUUID\\\":\\\"3a1d33fb-cc50-40c4-b06d-abd3cdc211c1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.142477 4908 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.143761 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.143784 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.143793 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.143805 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.143815 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.246453 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.246475 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.246482 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.246496 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.246506 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.347998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.348024 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.348031 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.348044 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.348052 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.429640 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:55:09.117975686 +0000 UTC Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.450646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.450678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.450687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.450700 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.450709 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.487764 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/2.log" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.490521 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.490871 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.503061 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.516733 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f0
2fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.535860 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.552472 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.552531 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.552545 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.552568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.552581 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.561446 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d9dc1fe-2c7d-4b28-a3d7-5fcecd5ba167\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.581686 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\"
,\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.595501 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.607806 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.627598 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.642846 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.655494 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.655532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.655540 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.655553 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.655563 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.657061 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.668247 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.684265 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.697583 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.713355 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.724329 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.738576 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc 
kubenswrapper[4908]: I0131 07:23:21.756560 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.757744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.757783 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.757798 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.757818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.757832 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.773740 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.785269 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:21Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.860666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.860797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.860816 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.860838 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.860849 4908 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.939614 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.939746 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.939918 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.940006 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.940133 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:21 crc kubenswrapper[4908]: E0131 07:23:21.940205 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.963261 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.963308 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.963319 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.963347 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:21 crc kubenswrapper[4908]: I0131 07:23:21.963364 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:21Z","lastTransitionTime":"2026-01-31T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.066085 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.066156 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.066178 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.066207 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.066234 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.168177 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.168213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.168224 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.168241 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.168253 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.270889 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.270933 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.270943 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.270958 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.270967 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.373151 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.373184 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.373194 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.373207 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.373216 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.430668 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:02:33.406376683 +0000 UTC Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.475148 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.475189 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.475199 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.475213 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.475222 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.496315 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.497289 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/2.log" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.500127 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" exitCode=1 Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.500172 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.500222 4908 scope.go:117] "RemoveContainer" containerID="672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.501591 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:23:22 crc kubenswrapper[4908]: E0131 07:23:22.501838 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.514376 4908 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.527357 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.540551 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.553231 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.564602 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc 
kubenswrapper[4908]: I0131 07:23:22.577409 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.577457 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.577468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.577485 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.577497 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.578333 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.590753 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.601585 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.612049 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.626640 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.648813 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672696c61ad1d8e7167c28e9940641ce167a277deb637ad71e7b2af45fe4d6d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:22:51Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 07:22:51.568084 6832 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:22:51.568479 6832 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:22:51.568646 6832 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:22:51.568673 6832 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:22:51.568776 6832 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:22:51.568853 6832 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:22:51.568866 6832 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:22:51.568873 6832 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:22:51.568890 6832 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:22:51.568970 6832 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:22:51.569016 6832 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:22:51.569121 6832 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:22:51.569187 6832 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:22:51.569541 6832 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:22:51.569574 6832 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"1 07:23:21.754839 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:23:21.754868 7187 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:23:21.754883 7187 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 
07:23:21.754890 7187 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 07:23:21.754904 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:23:21.754903 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:23:21.754912 7187 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:23:21.754937 7187 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:23:21.754956 7187 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 07:23:21.754966 7187 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 07:23:21.755014 7187 factory.go:656] Stopping watch factory\\\\nI0131 07:23:21.755039 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:23:21.755063 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:23:21.755091 7187 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:23:21.755185 7187 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2
684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.670517 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d9dc1fe-2c7d-4b28-a3d7-5fcecd5ba167\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.679744 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.679789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.679799 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.679817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.679827 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.684103 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.694744 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.706068 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.720201 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.735252 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.749648 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.762517 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d52167
1c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:22Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.782296 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.782346 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.782361 4908 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.782376 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.782387 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.885298 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.885346 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.885359 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.885375 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.885387 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.940056 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:22 crc kubenswrapper[4908]: E0131 07:23:22.940197 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.988519 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.988572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.988584 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.988637 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:22 crc kubenswrapper[4908]: I0131 07:23:22.988651 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:22Z","lastTransitionTime":"2026-01-31T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.091529 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.091592 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.091603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.091621 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.091631 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.194629 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.194678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.194708 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.194758 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.194801 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.298468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.298520 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.298531 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.298545 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.298554 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.400760 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.400807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.400820 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.400836 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.400849 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.431603 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:38:20.608736386 +0000 UTC Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.503094 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.503133 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.503144 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.503161 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.503179 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.505831 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.509517 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:23:23 crc kubenswrapper[4908]: E0131 07:23:23.509719 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.527235 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d9dc1fe-2c7d-4b28-a3d7-5fcecd5ba167\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://419e72d748701a7cbac4f41b93088ebcbc495919d370d4585bea16173abf59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d636e4b4a957c18d504e9a7dbea1c8de2f361c6d62f43440153fd3c3a9fd114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb565d0bb0f53ec00448d23eae755be1c0b61a3ac48cf4a2f3c41b31f68309e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7fa42d943a9fdfe418570df65b1f7cb0283986be5c708de3249a4437813074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427ceefa14da0dfd762ea13ae64a686ef3c4b543370be7c3678a326a21de8b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d60d5422cd204d352318b4b40c2dad90a82b5e0284bae925192368eb101e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee484db38d105d78e0dc3f2d203acfbc9eaaef217406b69efd0b08f1cd373a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0197c69c2bd270e05b11cbc71918c152abe2c3d0448b14c5e9cd570f4b927a40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.537621 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deb7dd11-7d10-45e2-a561-0d6941c51c43\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:21:49Z\\\",\\\"message\\\":\\\"W0131 07:21:48.533919 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 07:21:48.534289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769844108 cert, and key in /tmp/serving-cert-671030808/serving-signer.crt, /tmp/serving-cert-671030808/serving-signer.key\\\\nI0131 07:21:48.781531 1 observer_polling.go:159] Starting file observer\\\\nW0131 07:21:48.783287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 07:21:48.783433 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:21:48.784111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-671030808/tls.crt::/tmp/serving-cert-671030808/tls.key\\\\\\\"\\\\nF0131 07:21:49.049736 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.548735 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db6ec852-e95e-45de-ad44-ddc38907c9a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745a991c9c5f319a2963caaf508b01491692c3325e6b709376570b0fd6d874b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d985bf0c21fdaa280e6e3001a5ccdf36afc39a6ad0446f25d96eb13186d69ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://760487d3653d8039bb961bd2aface36198eeea534849f94840957f6f86e3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dde54d7a60c6a4109cae1b89cb08b34ae90e2d81e73bff8db88cd3a445274b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.560479 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f6000f53750e79eb938360bc8ef3bc2c624441ab2a1c68cd2643414ce4e6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.570215 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e21704-e401-411f-99c0-4b4afe2bcf9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d0cbc588ee01f4447d91a34df212eb175141c2757af92f3651683a2990dfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7vgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.586462 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d1945f-bd78-48c9-89be-35b3f2908dab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:21Z\\\",\\\"message\\\":\\\"1 07:23:21.754839 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:23:21.754868 7187 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:23:21.754883 7187 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 07:23:21.754890 7187 handler.go:190] Sending *v1.Pod event 
handler 6 for removal\\\\nI0131 07:23:21.754904 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:23:21.754903 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:23:21.754912 7187 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:23:21.754937 7187 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:23:21.754956 7187 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 07:23:21.754966 7187 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 07:23:21.755014 7187 factory.go:656] Stopping watch factory\\\\nI0131 07:23:21.755039 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:23:21.755063 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:23:21.755091 7187 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:23:21.755185 7187 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:23:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3353662fa10a7ca049
c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdzpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xkd4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.595738 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c76ed8-3e12-4f03-8a4c-fcce2f383736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d140ab766db4e2a205322df77c92dfca143606ef2ad6eef8ebe7824a1fb2ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fccdf59a4e50afe2f52463a10951ed2c28bf70698c214522758116c2a4af56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.610251 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c22f2be6-44d5-4c8b-b4aa-80c7d20cf116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7013748de8a7d9a7d4ec96c347bf75943e60092b5257fbeb463fef780d82afd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e998a890b532cfa256192bef08c844b9da92c1e227869aa170e95f535454e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd37b7c3982a1040f39c44149f391a9f699f998593dfed07f65c6c697103e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T07:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:21:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.611218 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.611246 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.611256 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.611269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.611279 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.623209 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85b723d6-2526-40a1-9e55-05487affbda0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc31b24b6cfd1400d56d1db7b6c204037f55d607e1f1d03c78c0cc61ec38bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abce21a8cb5d8563627e3b86718c101d521671c14af9463131aa9d3777565d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcd7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49tqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.634763 4908 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.644937 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0034a95f6e2456fb2ec4edad89c0eae1de055aeacd024505e9290a92b2e6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.658580 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-944z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:23:07Z\\\",\\\"message\\\":\\\"2026-01-31T07:22:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270\\\\n2026-01-31T07:22:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a695681-0dcc-4fd1-9797-5cfdad934270 to /host/opt/cni/bin/\\\\n2026-01-31T07:22:22Z [verbose] multus-daemon started\\\\n2026-01-31T07:22:22Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:23:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qsgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-944z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.669187 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.680551 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49c2d9d8a448e75c2dbf23feda1d55a39be693a7de9b9a6b20795862d5637f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b5ca7a5f0749d4f519e8cc194ab73fdb80413157a1562d60c23a4b7839edd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.690551 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.699719 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nxc4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ae0245-683c-4bd0-b14f-10d048e5db01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484564c3dd89d6e3e0a0fcdc73dbf5992309fceffdd06400188cf1ac221018e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nhw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nxc4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.711033 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e2a4089-bcb9-4be0-bfbc-30ca54029e9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde508a2a81cd89c4b62aef1f00bf38bd16df44670ae52b0402b240c013819c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9209f4f6633d6e9e90e6181af00126e431c3a0d97247bb5339635e2ce2ff297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb1dca056f22aa6116985d0087dbf92681869506d8ca9bff541b2d8698e943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429387e95c40d68bfa0dfe00c246d63d1f6c945778e222358b625d7ca4bc34ba\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1addbadb79b87193bedf5f40f15dfdc99a81a4430a7e2a9520891abc299d0482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204
a2a840538bf430d31a1f843f590dfba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b28e8068ca800805d340d17bbef204a2a840538bf430d31a1f843f590dfba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f817a715cba3a232fd0f9eaeef2ff32fdb63c402053f0156738f2f31370b17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-31T07:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfplt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fwlxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.713544 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.713577 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.713585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.713598 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.713607 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.720445 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kk2t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425085fb-8558-4dca-814f-38c080bc3672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51054dec127ba98b39e8719afd994523b2579642e2b65a94f7f6492cc5c28de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8jr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kk2t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.729160 4908 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2cg54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1242d7b7-ba0b-4084-88f1-fedf57d84b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn5gh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:22:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2cg54\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:23:23Z is after 2025-08-24T17:21:41Z" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.815449 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.815490 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.815502 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.815518 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.815531 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.917895 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.917948 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.917960 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.917995 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.918010 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:23Z","lastTransitionTime":"2026-01-31T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.939188 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.939247 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:23 crc kubenswrapper[4908]: I0131 07:23:23.939256 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:23 crc kubenswrapper[4908]: E0131 07:23:23.939349 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:23 crc kubenswrapper[4908]: E0131 07:23:23.939478 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:23 crc kubenswrapper[4908]: E0131 07:23:23.939725 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.020331 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.020377 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.020389 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.020405 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.020416 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.122150 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.122181 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.122189 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.122203 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.122212 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.224219 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.224285 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.224308 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.224337 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.224359 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.326510 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.326555 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.326566 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.326580 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.326591 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.429782 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.429826 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.429837 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.429852 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.429866 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.433065 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:42:27.333675309 +0000 UTC Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.532641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.532678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.532686 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.532701 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.532710 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.635427 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.635502 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.635527 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.635556 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.635574 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.738446 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.738487 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.738498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.738516 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.738529 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.840549 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.840607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.840684 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.840737 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.840808 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.940162 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:24 crc kubenswrapper[4908]: E0131 07:23:24.940386 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.942931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.942962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.942973 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.943013 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:24 crc kubenswrapper[4908]: I0131 07:23:24.943026 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:24Z","lastTransitionTime":"2026-01-31T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.045078 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.045116 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.045126 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.045142 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.045151 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.147926 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.147968 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.147993 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.148009 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.148020 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.250899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.250955 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.251001 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.251026 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.251043 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.353603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.353650 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.353661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.353678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.353690 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.434826 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:57:24.21047298 +0000 UTC Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.455486 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.455561 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.455574 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.455598 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.455611 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.558501 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.558532 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.558542 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.558565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.558578 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.661554 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.661582 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.661590 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.661603 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.661611 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.763767 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.763818 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.763829 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.763844 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.763853 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.866568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.866612 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.866621 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.866637 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.866646 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.939640 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.939707 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:25 crc kubenswrapper[4908]: E0131 07:23:25.940125 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:25 crc kubenswrapper[4908]: E0131 07:23:25.940260 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.940442 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:25 crc kubenswrapper[4908]: E0131 07:23:25.940546 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.968249 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.968284 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.968292 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.968306 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:25 crc kubenswrapper[4908]: I0131 07:23:25.968316 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:25Z","lastTransitionTime":"2026-01-31T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.070971 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.071123 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.071157 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.071181 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.071198 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.173468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.173529 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.173544 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.173565 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.173580 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.276048 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.276127 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.276145 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.276168 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.276184 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.378664 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.378703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.378730 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.378749 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.378761 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.435208 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:26:06.148456013 +0000 UTC Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.482521 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.482607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.482623 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.482646 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.482662 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.585015 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.585060 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.585072 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.585090 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.585101 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.687998 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.688063 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.688075 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.688099 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.688117 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.791133 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.791191 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.791202 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.791220 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.791232 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.893830 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.893877 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.893889 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.893909 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.893924 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.939359 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:26 crc kubenswrapper[4908]: E0131 07:23:26.939506 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.996269 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.996313 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.996328 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.996349 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:26 crc kubenswrapper[4908]: I0131 07:23:26.996364 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:26Z","lastTransitionTime":"2026-01-31T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.099890 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.099962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.100019 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.100051 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.100074 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.203661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.203732 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.203741 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.203757 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.203771 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.305505 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.305559 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.305572 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.305589 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.305602 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.408693 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.408760 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.408775 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.408801 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.408818 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.436403 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:27:56.52546891 +0000 UTC Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.511863 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.511931 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.511944 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.511962 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.511997 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.614843 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.614910 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.614920 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.614936 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.614948 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.718071 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.718112 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.718121 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.718136 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.718146 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.792381 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:27 crc kubenswrapper[4908]: E0131 07:23:27.792568 4908 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:23:27 crc kubenswrapper[4908]: E0131 07:23:27.792677 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs podName:1242d7b7-ba0b-4084-88f1-fedf57d84b11 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:31.792651796 +0000 UTC m=+178.408596510 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs") pod "network-metrics-daemon-2cg54" (UID: "1242d7b7-ba0b-4084-88f1-fedf57d84b11") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.820379 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.820420 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.820435 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.820451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.820463 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.923458 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.923500 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.923511 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.923525 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.923536 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:27Z","lastTransitionTime":"2026-01-31T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.940123 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:27 crc kubenswrapper[4908]: E0131 07:23:27.940265 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.940475 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:27 crc kubenswrapper[4908]: E0131 07:23:27.940547 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.940682 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:27 crc kubenswrapper[4908]: E0131 07:23:27.940740 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:27 crc kubenswrapper[4908]: I0131 07:23:27.993850 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-944z2" podStartSLOduration=78.993827513 podStartE2EDuration="1m18.993827513s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:27.981850121 +0000 UTC m=+114.597794775" watchObservedRunningTime="2026-01-31 07:23:27.993827513 +0000 UTC m=+114.609772167" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.014552 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nxc4t" podStartSLOduration=79.014529042 podStartE2EDuration="1m19.014529042s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:27.99409402 +0000 UTC m=+114.610038674" watchObservedRunningTime="2026-01-31 07:23:28.014529042 +0000 UTC m=+114.630473696" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.030394 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.030459 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.030473 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.030494 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.030507 4908 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.034402 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fwlxr" podStartSLOduration=79.034334127 podStartE2EDuration="1m19.034334127s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.014748808 +0000 UTC m=+114.630693462" watchObservedRunningTime="2026-01-31 07:23:28.034334127 +0000 UTC m=+114.650278781" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.048062 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kk2t9" podStartSLOduration=79.048040214 podStartE2EDuration="1m19.048040214s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.035161159 +0000 UTC m=+114.651105813" watchObservedRunningTime="2026-01-31 07:23:28.048040214 +0000 UTC m=+114.663984868" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.132951 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.133000 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 
07:23:28.133015 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.133040 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.133052 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.154813 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podStartSLOduration=79.154791863 podStartE2EDuration="1m19.154791863s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.127914873 +0000 UTC m=+114.743859547" watchObservedRunningTime="2026-01-31 07:23:28.154791863 +0000 UTC m=+114.770736527" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.179677 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.17965811 podStartE2EDuration="21.17965811s" podCreationTimestamp="2026-01-31 07:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.178258474 +0000 UTC m=+114.794203138" watchObservedRunningTime="2026-01-31 07:23:28.17965811 +0000 UTC m=+114.795602764" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 
07:23:28.206037 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.206019936 podStartE2EDuration="1m15.206019936s" podCreationTimestamp="2026-01-31 07:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.192860584 +0000 UTC m=+114.808805238" watchObservedRunningTime="2026-01-31 07:23:28.206019936 +0000 UTC m=+114.821964590" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.217255 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.217238209 podStartE2EDuration="22.217238209s" podCreationTimestamp="2026-01-31 07:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.217096765 +0000 UTC m=+114.833041429" watchObservedRunningTime="2026-01-31 07:23:28.217238209 +0000 UTC m=+114.833182863" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.217709 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.217703631 podStartE2EDuration="1m1.217703631s" podCreationTimestamp="2026-01-31 07:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.206945091 +0000 UTC m=+114.822889745" watchObservedRunningTime="2026-01-31 07:23:28.217703631 +0000 UTC m=+114.833648285" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.235720 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.235770 4908 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.235786 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.235813 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.235827 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.250526 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.250505434 podStartE2EDuration="1m22.250505434s" podCreationTimestamp="2026-01-31 07:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.232891566 +0000 UTC m=+114.848836230" watchObservedRunningTime="2026-01-31 07:23:28.250505434 +0000 UTC m=+114.866450088" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.251153 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49tqp" podStartSLOduration=79.251148821 podStartE2EDuration="1m19.251148821s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:28.250612887 +0000 UTC 
m=+114.866557541" watchObservedRunningTime="2026-01-31 07:23:28.251148821 +0000 UTC m=+114.867093465" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.338453 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.338498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.338517 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.338542 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.338558 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.436621 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:51:14.132804968 +0000 UTC Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.441666 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.441736 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.441746 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.441768 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.441782 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.544734 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.544784 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.544802 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.544824 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.544842 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.647817 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.647866 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.647876 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.647890 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.647899 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.750809 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.750846 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.750857 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.750874 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.750884 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.854440 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.854492 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.854507 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.854524 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.854538 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.939910 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:28 crc kubenswrapper[4908]: E0131 07:23:28.940222 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.957692 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.957765 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.957789 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.957825 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:28 crc kubenswrapper[4908]: I0131 07:23:28.957850 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:28Z","lastTransitionTime":"2026-01-31T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.061573 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.061653 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.061675 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.061706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.061729 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.165067 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.165134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.165150 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.165174 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.165191 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.268591 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.268641 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.268663 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.268687 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.268702 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.371127 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.371182 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.371205 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.371229 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.371243 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.437179 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:15:26.257527854 +0000 UTC Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.474495 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.474553 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.474564 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.474586 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.474602 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.578414 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.578585 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.578598 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.578614 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.578624 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.681477 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.681550 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.681562 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.681579 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.681592 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.784422 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.784468 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.784478 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.784498 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.784510 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.887616 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.887688 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.887713 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.887747 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.887770 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.939692 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.939746 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.939815 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:29 crc kubenswrapper[4908]: E0131 07:23:29.939935 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:29 crc kubenswrapper[4908]: E0131 07:23:29.940146 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:29 crc kubenswrapper[4908]: E0131 07:23:29.940316 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.990568 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.990621 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.990633 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.990657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:29 crc kubenswrapper[4908]: I0131 07:23:29.990671 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:29Z","lastTransitionTime":"2026-01-31T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.093607 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.093671 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.093685 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.093721 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.093738 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.197329 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.197661 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.197681 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.197703 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.197715 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.299858 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.299900 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.299911 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.299926 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.299937 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.402240 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.402282 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.402295 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.402312 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.402322 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.437881 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:40:35.346124197 +0000 UTC Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.505056 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.505104 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.505116 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.505134 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.505146 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.607728 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.607778 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.607790 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.607807 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.607818 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.710600 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.710657 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.710678 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.710706 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.710732 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.813187 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.813215 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.813223 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.813239 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.813248 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.916062 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.916101 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.916111 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.916126 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.916135 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:30Z","lastTransitionTime":"2026-01-31T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:30 crc kubenswrapper[4908]: I0131 07:23:30.939860 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:30 crc kubenswrapper[4908]: E0131 07:23:30.940064 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.019814 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.019882 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.019899 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.019925 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.019940 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:31Z","lastTransitionTime":"2026-01-31T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.121797 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.121833 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.121844 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.121856 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.121867 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:31Z","lastTransitionTime":"2026-01-31T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.224004 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.224053 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.224063 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.224080 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.224090 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:31Z","lastTransitionTime":"2026-01-31T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.326395 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.326441 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.326451 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.326466 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.326476 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:31Z","lastTransitionTime":"2026-01-31T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.340318 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.340367 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.340379 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.340395 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.340407 4908 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:23:31Z","lastTransitionTime":"2026-01-31T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.380509 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx"] Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.380996 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.382856 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.383102 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.383195 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.384063 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.437070 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a80016-0e20-4e86-8656-cca5123ff806-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.437118 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a80016-0e20-4e86-8656-cca5123ff806-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.437147 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.437169 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.437184 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20a80016-0e20-4e86-8656-cca5123ff806-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.438038 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:04:08.346795378 +0000 UTC Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.438089 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.445129 4908 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.537599 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.537665 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20a80016-0e20-4e86-8656-cca5123ff806-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.537735 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a80016-0e20-4e86-8656-cca5123ff806-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.537780 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.537793 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a80016-0e20-4e86-8656-cca5123ff806-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 
crc kubenswrapper[4908]: I0131 07:23:31.537937 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.538052 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20a80016-0e20-4e86-8656-cca5123ff806-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.538493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20a80016-0e20-4e86-8656-cca5123ff806-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.543727 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a80016-0e20-4e86-8656-cca5123ff806-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: \"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.558421 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a80016-0e20-4e86-8656-cca5123ff806-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lp6nx\" (UID: 
\"20a80016-0e20-4e86-8656-cca5123ff806\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.698301 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" Jan 31 07:23:31 crc kubenswrapper[4908]: W0131 07:23:31.714260 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20a80016_0e20_4e86_8656_cca5123ff806.slice/crio-b48ed4e73871946239d502ecd1cb8d1ecadc9d203ccd814f1eeba4208ce0d8f0 WatchSource:0}: Error finding container b48ed4e73871946239d502ecd1cb8d1ecadc9d203ccd814f1eeba4208ce0d8f0: Status 404 returned error can't find the container with id b48ed4e73871946239d502ecd1cb8d1ecadc9d203ccd814f1eeba4208ce0d8f0 Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.939518 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:31 crc kubenswrapper[4908]: E0131 07:23:31.939918 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.939573 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:31 crc kubenswrapper[4908]: E0131 07:23:31.940067 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:31 crc kubenswrapper[4908]: I0131 07:23:31.939541 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:31 crc kubenswrapper[4908]: E0131 07:23:31.940162 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:32 crc kubenswrapper[4908]: I0131 07:23:32.537287 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" event={"ID":"20a80016-0e20-4e86-8656-cca5123ff806","Type":"ContainerStarted","Data":"61fac841659298ab22d549877af6cfe6e866aeaa3f120ee6dfe13ed56d97265e"} Jan 31 07:23:32 crc kubenswrapper[4908]: I0131 07:23:32.537340 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" event={"ID":"20a80016-0e20-4e86-8656-cca5123ff806","Type":"ContainerStarted","Data":"b48ed4e73871946239d502ecd1cb8d1ecadc9d203ccd814f1eeba4208ce0d8f0"} Jan 31 07:23:32 crc kubenswrapper[4908]: I0131 07:23:32.939681 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:32 crc kubenswrapper[4908]: E0131 07:23:32.939839 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:33 crc kubenswrapper[4908]: I0131 07:23:33.939689 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:33 crc kubenswrapper[4908]: I0131 07:23:33.939857 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:33 crc kubenswrapper[4908]: I0131 07:23:33.939687 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:33 crc kubenswrapper[4908]: E0131 07:23:33.939888 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:33 crc kubenswrapper[4908]: E0131 07:23:33.940103 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:33 crc kubenswrapper[4908]: E0131 07:23:33.940139 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:34 crc kubenswrapper[4908]: I0131 07:23:34.939593 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:34 crc kubenswrapper[4908]: E0131 07:23:34.939762 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:35 crc kubenswrapper[4908]: I0131 07:23:35.939704 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:35 crc kubenswrapper[4908]: E0131 07:23:35.939841 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:35 crc kubenswrapper[4908]: I0131 07:23:35.939905 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:35 crc kubenswrapper[4908]: I0131 07:23:35.939903 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:35 crc kubenswrapper[4908]: E0131 07:23:35.940247 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:35 crc kubenswrapper[4908]: E0131 07:23:35.940421 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:35 crc kubenswrapper[4908]: I0131 07:23:35.940580 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:23:35 crc kubenswrapper[4908]: E0131 07:23:35.940766 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:23:36 crc kubenswrapper[4908]: I0131 07:23:36.939770 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:36 crc kubenswrapper[4908]: E0131 07:23:36.940322 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:37 crc kubenswrapper[4908]: E0131 07:23:37.397440 4908 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 07:23:37 crc kubenswrapper[4908]: I0131 07:23:37.939173 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:37 crc kubenswrapper[4908]: E0131 07:23:37.941471 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:37 crc kubenswrapper[4908]: I0131 07:23:37.941562 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:37 crc kubenswrapper[4908]: E0131 07:23:37.942201 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:37 crc kubenswrapper[4908]: I0131 07:23:37.942092 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:37 crc kubenswrapper[4908]: E0131 07:23:37.942430 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:38 crc kubenswrapper[4908]: I0131 07:23:38.939223 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:38 crc kubenswrapper[4908]: E0131 07:23:38.939380 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:38 crc kubenswrapper[4908]: E0131 07:23:38.990315 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:23:39 crc kubenswrapper[4908]: I0131 07:23:39.939536 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:39 crc kubenswrapper[4908]: I0131 07:23:39.939751 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:39 crc kubenswrapper[4908]: E0131 07:23:39.939864 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:39 crc kubenswrapper[4908]: E0131 07:23:39.940011 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:39 crc kubenswrapper[4908]: I0131 07:23:39.940472 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:39 crc kubenswrapper[4908]: E0131 07:23:39.940654 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:40 crc kubenswrapper[4908]: I0131 07:23:40.939117 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:40 crc kubenswrapper[4908]: E0131 07:23:40.939528 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:41 crc kubenswrapper[4908]: I0131 07:23:41.940090 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:41 crc kubenswrapper[4908]: I0131 07:23:41.940189 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:41 crc kubenswrapper[4908]: E0131 07:23:41.940299 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:41 crc kubenswrapper[4908]: I0131 07:23:41.940392 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:41 crc kubenswrapper[4908]: E0131 07:23:41.940548 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:41 crc kubenswrapper[4908]: E0131 07:23:41.940653 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:42 crc kubenswrapper[4908]: I0131 07:23:42.939191 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:42 crc kubenswrapper[4908]: E0131 07:23:42.939423 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:43 crc kubenswrapper[4908]: I0131 07:23:43.940094 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:43 crc kubenswrapper[4908]: I0131 07:23:43.940176 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:43 crc kubenswrapper[4908]: E0131 07:23:43.940234 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:43 crc kubenswrapper[4908]: E0131 07:23:43.940323 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:43 crc kubenswrapper[4908]: I0131 07:23:43.940395 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:43 crc kubenswrapper[4908]: E0131 07:23:43.940458 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:43 crc kubenswrapper[4908]: E0131 07:23:43.991794 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:23:44 crc kubenswrapper[4908]: I0131 07:23:44.939517 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:44 crc kubenswrapper[4908]: E0131 07:23:44.939714 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:45 crc kubenswrapper[4908]: I0131 07:23:45.939869 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:45 crc kubenswrapper[4908]: I0131 07:23:45.939959 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:45 crc kubenswrapper[4908]: E0131 07:23:45.940095 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:45 crc kubenswrapper[4908]: E0131 07:23:45.940182 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:45 crc kubenswrapper[4908]: I0131 07:23:45.940390 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:45 crc kubenswrapper[4908]: E0131 07:23:45.940497 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:46 crc kubenswrapper[4908]: I0131 07:23:46.940088 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:46 crc kubenswrapper[4908]: E0131 07:23:46.940826 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:47 crc kubenswrapper[4908]: I0131 07:23:47.940241 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:47 crc kubenswrapper[4908]: I0131 07:23:47.940309 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:47 crc kubenswrapper[4908]: I0131 07:23:47.942683 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:47 crc kubenswrapper[4908]: E0131 07:23:47.942793 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:47 crc kubenswrapper[4908]: E0131 07:23:47.942939 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:47 crc kubenswrapper[4908]: E0131 07:23:47.943109 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:47 crc kubenswrapper[4908]: I0131 07:23:47.943520 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:23:47 crc kubenswrapper[4908]: E0131 07:23:47.943895 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xkd4f_openshift-ovn-kubernetes(d0d1945f-bd78-48c9-89be-35b3f2908dab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" Jan 31 07:23:48 crc kubenswrapper[4908]: I0131 07:23:48.940145 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:48 crc kubenswrapper[4908]: E0131 07:23:48.940512 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:48 crc kubenswrapper[4908]: E0131 07:23:48.993720 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:23:49 crc kubenswrapper[4908]: I0131 07:23:49.939343 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:49 crc kubenswrapper[4908]: E0131 07:23:49.939736 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:49 crc kubenswrapper[4908]: I0131 07:23:49.939881 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:49 crc kubenswrapper[4908]: E0131 07:23:49.940276 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:49 crc kubenswrapper[4908]: I0131 07:23:49.940188 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:49 crc kubenswrapper[4908]: E0131 07:23:49.941040 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:50 crc kubenswrapper[4908]: I0131 07:23:50.940341 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:50 crc kubenswrapper[4908]: E0131 07:23:50.940652 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:51 crc kubenswrapper[4908]: I0131 07:23:51.939714 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:51 crc kubenswrapper[4908]: I0131 07:23:51.939733 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:51 crc kubenswrapper[4908]: E0131 07:23:51.940182 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:51 crc kubenswrapper[4908]: E0131 07:23:51.940330 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:51 crc kubenswrapper[4908]: I0131 07:23:51.939927 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:51 crc kubenswrapper[4908]: E0131 07:23:51.940549 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:52 crc kubenswrapper[4908]: I0131 07:23:52.939641 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:52 crc kubenswrapper[4908]: E0131 07:23:52.939832 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.625749 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/1.log" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.626565 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/0.log" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.626707 4908 generic.go:334] "Generic (PLEG): container finished" podID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" containerID="194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded" exitCode=1 Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.626884 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerDied","Data":"194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded"} Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.627261 4908 scope.go:117] "RemoveContainer" containerID="1c4d913f81570c3a6581703bdae8d4194169d850a18a83e30a17d206a3b0e20a" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.628839 4908 scope.go:117] "RemoveContainer" containerID="194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded" Jan 31 07:23:53 crc kubenswrapper[4908]: E0131 07:23:53.629204 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-944z2_openshift-multus(c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b)\"" pod="openshift-multus/multus-944z2" podUID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.655493 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lp6nx" podStartSLOduration=104.655472157 podStartE2EDuration="1m44.655472157s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:23:32.549476718 +0000 UTC m=+119.165421362" watchObservedRunningTime="2026-01-31 07:23:53.655472157 +0000 UTC m=+140.271416821" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.940062 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:53 crc kubenswrapper[4908]: E0131 07:23:53.940296 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.940346 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:53 crc kubenswrapper[4908]: I0131 07:23:53.940368 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:53 crc kubenswrapper[4908]: E0131 07:23:53.940508 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:53 crc kubenswrapper[4908]: E0131 07:23:53.940615 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:53 crc kubenswrapper[4908]: E0131 07:23:53.995429 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:23:54 crc kubenswrapper[4908]: I0131 07:23:54.633422 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/1.log" Jan 31 07:23:54 crc kubenswrapper[4908]: I0131 07:23:54.939835 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:54 crc kubenswrapper[4908]: E0131 07:23:54.940563 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:55 crc kubenswrapper[4908]: I0131 07:23:55.940026 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:55 crc kubenswrapper[4908]: I0131 07:23:55.940026 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:55 crc kubenswrapper[4908]: E0131 07:23:55.940212 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:55 crc kubenswrapper[4908]: I0131 07:23:55.940243 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:55 crc kubenswrapper[4908]: E0131 07:23:55.940340 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:55 crc kubenswrapper[4908]: E0131 07:23:55.940596 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:56 crc kubenswrapper[4908]: I0131 07:23:56.939358 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:56 crc kubenswrapper[4908]: E0131 07:23:56.939554 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:57 crc kubenswrapper[4908]: I0131 07:23:57.940050 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:57 crc kubenswrapper[4908]: I0131 07:23:57.940053 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:57 crc kubenswrapper[4908]: E0131 07:23:57.942865 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:57 crc kubenswrapper[4908]: I0131 07:23:57.942882 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:57 crc kubenswrapper[4908]: E0131 07:23:57.943050 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:23:57 crc kubenswrapper[4908]: E0131 07:23:57.943234 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:58 crc kubenswrapper[4908]: I0131 07:23:58.939525 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:23:58 crc kubenswrapper[4908]: E0131 07:23:58.939674 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:23:58 crc kubenswrapper[4908]: E0131 07:23:58.996592 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 31 07:23:59 crc kubenswrapper[4908]: I0131 07:23:59.939609 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:23:59 crc kubenswrapper[4908]: E0131 07:23:59.939749 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:23:59 crc kubenswrapper[4908]: I0131 07:23:59.939794 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:23:59 crc kubenswrapper[4908]: E0131 07:23:59.940074 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:23:59 crc kubenswrapper[4908]: I0131 07:23:59.940359 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:23:59 crc kubenswrapper[4908]: E0131 07:23:59.940605 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:24:00 crc kubenswrapper[4908]: I0131 07:24:00.939709 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:00 crc kubenswrapper[4908]: E0131 07:24:00.939849 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:24:01 crc kubenswrapper[4908]: I0131 07:24:01.940051 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:01 crc kubenswrapper[4908]: I0131 07:24:01.940159 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:01 crc kubenswrapper[4908]: I0131 07:24:01.940162 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:01 crc kubenswrapper[4908]: E0131 07:24:01.940902 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:24:01 crc kubenswrapper[4908]: E0131 07:24:01.941623 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:24:01 crc kubenswrapper[4908]: E0131 07:24:01.941780 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:24:02 crc kubenswrapper[4908]: I0131 07:24:02.940121 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:02 crc kubenswrapper[4908]: E0131 07:24:02.940279 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:24:02 crc kubenswrapper[4908]: I0131 07:24:02.941096 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.665951 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.668611 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerStarted","Data":"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65"} Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.669008 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.695544 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podStartSLOduration=114.695529435 podStartE2EDuration="1m54.695529435s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:03.695461284 +0000 UTC m=+150.311405938" watchObservedRunningTime="2026-01-31 07:24:03.695529435 +0000 UTC m=+150.311474089" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.837937 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2cg54"] Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.838061 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:03 crc kubenswrapper[4908]: E0131 07:24:03.838176 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.940096 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.940136 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:03 crc kubenswrapper[4908]: I0131 07:24:03.940136 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:03 crc kubenswrapper[4908]: E0131 07:24:03.940210 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:24:03 crc kubenswrapper[4908]: E0131 07:24:03.940326 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:24:03 crc kubenswrapper[4908]: E0131 07:24:03.940378 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:24:03 crc kubenswrapper[4908]: E0131 07:24:03.998016 4908 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:24:04 crc kubenswrapper[4908]: I0131 07:24:04.940111 4908 scope.go:117] "RemoveContainer" containerID="194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded" Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.678052 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/1.log" Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.678109 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerStarted","Data":"76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc"} Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.939790 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.939838 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:05 crc kubenswrapper[4908]: E0131 07:24:05.939927 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.940039 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:05 crc kubenswrapper[4908]: I0131 07:24:05.940070 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:05 crc kubenswrapper[4908]: E0131 07:24:05.940240 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:24:05 crc kubenswrapper[4908]: E0131 07:24:05.940398 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:24:05 crc kubenswrapper[4908]: E0131 07:24:05.940590 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:24:07 crc kubenswrapper[4908]: I0131 07:24:07.939390 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:07 crc kubenswrapper[4908]: I0131 07:24:07.939428 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:07 crc kubenswrapper[4908]: I0131 07:24:07.939496 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:07 crc kubenswrapper[4908]: I0131 07:24:07.939696 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:07 crc kubenswrapper[4908]: E0131 07:24:07.940593 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:24:07 crc kubenswrapper[4908]: E0131 07:24:07.940705 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:24:07 crc kubenswrapper[4908]: E0131 07:24:07.940860 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:24:07 crc kubenswrapper[4908]: E0131 07:24:07.940961 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2cg54" podUID="1242d7b7-ba0b-4084-88f1-fedf57d84b11" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.939199 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.939227 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.939213 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.939410 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.941629 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.942384 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.942677 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.943365 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.943452 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 07:24:09 crc kubenswrapper[4908]: I0131 07:24:09.943674 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.373164 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:10 crc kubenswrapper[4908]: E0131 07:24:10.373405 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:26:12.373373935 +0000 UTC m=+278.989318589 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.373637 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.373687 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.373782 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.373837 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.379269 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.382776 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.383803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 
07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.402548 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.555182 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.573999 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.579911 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:10 crc kubenswrapper[4908]: I0131 07:24:10.862836 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:24:11 crc kubenswrapper[4908]: W0131 07:24:11.022651 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f2c5c80d5c6bf574991e323b21ed688ad51d32ead227ce9e2338eed6412db8f0 WatchSource:0}: Error finding container f2c5c80d5c6bf574991e323b21ed688ad51d32ead227ce9e2338eed6412db8f0: Status 404 returned error can't find the container with id f2c5c80d5c6bf574991e323b21ed688ad51d32ead227ce9e2338eed6412db8f0 Jan 31 07:24:11 crc kubenswrapper[4908]: W0131 07:24:11.023664 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-8bc1f6022807e6c6990806990daec06d61c5a5a60942e8b3f2afed1a3e0fe00b WatchSource:0}: Error finding container 8bc1f6022807e6c6990806990daec06d61c5a5a60942e8b3f2afed1a3e0fe00b: Status 404 returned error can't find the container with id 8bc1f6022807e6c6990806990daec06d61c5a5a60942e8b3f2afed1a3e0fe00b Jan 31 07:24:11 crc kubenswrapper[4908]: W0131 07:24:11.024564 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e676ccc40307eff9e0438fc879f03ce614fe632d30f996f0c7a93a301db5bfeb WatchSource:0}: Error finding container e676ccc40307eff9e0438fc879f03ce614fe632d30f996f0c7a93a301db5bfeb: Status 404 returned error can't find the container with id e676ccc40307eff9e0438fc879f03ce614fe632d30f996f0c7a93a301db5bfeb Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.699049 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2205dfc4ac6e56b2fb4accdb2f25a05c0353a4a4d61431b0923d4964f7c1de5f"} Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.699097 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f2c5c80d5c6bf574991e323b21ed688ad51d32ead227ce9e2338eed6412db8f0"} Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.701721 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"100224b9ef62a4c04765717bcf3d809c1a8b6be3cb3f3e27038b81bca8ee4bad"} Jan 31 07:24:11 crc 
kubenswrapper[4908]: I0131 07:24:11.701772 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8bc1f6022807e6c6990806990daec06d61c5a5a60942e8b3f2afed1a3e0fe00b"} Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.702028 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.703484 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1567269e16715c020988415cc235dab66a61b8b2adb2360c60e48c9171b54b23"} Jan 31 07:24:11 crc kubenswrapper[4908]: I0131 07:24:11.703519 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e676ccc40307eff9e0438fc879f03ce614fe632d30f996f0c7a93a301db5bfeb"} Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.275914 4908 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.306546 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.306940 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.308228 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.309205 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.309329 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.309710 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.309731 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.310379 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.310872 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.311547 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4jhvx"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.311710 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.312034 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.312273 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.312570 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.314072 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.314124 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.314190 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.314121 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.315200 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.315231 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.315295 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.315311 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.315900 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.316102 4908 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.316127 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.316146 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.316207 4908 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.316250 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318390 4908 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318429 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318574 4908 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318593 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318603 4908 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318626 4908 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-controller-manager-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318638 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318640 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-controller-manager-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318669 4908 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318681 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318683 4908 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318697 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318718 4908 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318731 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318733 4908 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-controller-manager-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318751 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.318821 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318829 4908 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: secrets "openshift-controller-manager-operator-dockercfg-vw8fw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318884 4908 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318919 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.318877 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-operator-dockercfg-vw8fw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.318966 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.319032 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.320047 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.320746 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.321225 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l4rd4"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.321628 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.322281 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.322802 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.325402 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326383 4908 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326420 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326514 4908 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326538 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326594 4908 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326608 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326644 4908 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326654 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326719 4908 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326730 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326771 4908 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326784 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326850 4908 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326862 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.326909 4908 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.326922 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.327142 4908 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.327170 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.327227 4908 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.327240 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.327283 4908 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.327299 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.328678 4908 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.328705 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.328742 4908 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.328753 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.328774 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8rjct"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.329357 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.329358 4908 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.329457 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.329849 4908 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: secrets "openshift-config-operator-dockercfg-7pc5z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.329881 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8rjct"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.329904 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-config-operator-dockercfg-7pc5z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.332643 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.333130 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334174 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.334219 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334320 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334359 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334381 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.334387 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334419 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334478 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334538 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.334538 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.334557 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-service-ca\" is forbidden: User \"system:node:crc\" cannot list
resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334680 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334798 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.334885 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334992 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335022 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.335031 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.335053 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335066 4908 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: secrets "v4-0-config-user-template-error" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335077 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-error\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.334890 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335111 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335108 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User 
\"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335121 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335154 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335161 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: secrets "v4-0-config-system-session" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335183 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335165 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335262 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335275 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.335524 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335612 4908 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the 
namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335620 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335629 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335639 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.335668 4908 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.335678 4908 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.335748 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.335838 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.338205 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.338601 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vln8d"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.339018 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.339294 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.339848 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.340297 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.341895 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.349393 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.349543 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.350581 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.355001 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.356160 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.356676 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.357811 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.358970 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fhcvf"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.359701 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.362479 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.362744 4908 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.362786 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.364274 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.364792 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.365798 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.365873 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.365949 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366058 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366138 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366273 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366586 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 07:24:12 crc kubenswrapper[4908]: W0131 07:24:12.366709 4908 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 31 07:24:12 crc kubenswrapper[4908]: E0131 07:24:12.366752 4908 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366813 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.366940 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367074 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367184 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367324 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367450 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367570 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367674 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367781 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.367902 4908 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368036 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368198 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368252 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368308 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368386 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368452 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368506 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368593 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368678 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.368703 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.369172 4908 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.369195 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.370095 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xz8pb"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.370722 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.371866 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.375429 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.375717 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.375960 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.376357 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.376771 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.376912 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.377071 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.377243 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.377444 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.377630 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.378212 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.379646 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.380179 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.381295 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lnblx"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.381837 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.382219 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.383919 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.387966 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.388961 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.390396 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.390622 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.391994 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jwbt2"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.393928 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.394625 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.394923 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.395605 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.408416 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.409079 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.410382 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.412526 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.418731 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.419172 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.420930 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.422730 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.423626 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.423891 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.425706 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.428207 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s2hzs"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.428790 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5dh6j"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.430182 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.430267 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.430354 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.431798 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.433662 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.434121 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.434148 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.434269 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.434451 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5dh6j"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.437050 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8rjct"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.438082 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.438863 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.441246 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.443472 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xz8pb"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.446665 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l4rd4"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.448605 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.454595 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jwbt2"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.456065 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.457299 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.457557 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.460072 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.461067 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.471908 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.474569 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.476053 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4jhvx"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.480790 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vln8d"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.489907 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.493533 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.495453 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.496779 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2rbj4"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.497754 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.498701 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cgsr5"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.499380 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cgsr5"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.499748 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.501884 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.503072 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.504104 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.505582 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fhcvf"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.508841 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8schl\" (UniqueName: \"kubernetes.io/projected/e65ad778-99ee-423a-b0d7-825171576820-kube-api-access-8schl\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.508872 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ae9cc7-27cb-43ca-9543-286306c0272c-serving-cert\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.508916 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/990cab2f-be71-43a6-b496-b37e97cd7156-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.508938 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b82r\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-kube-api-access-4b82r\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.508958 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509010 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509030 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509051 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dctl6\" (UniqueName: \"kubernetes.io/projected/787bae26-eaf0-4c74-84a1-4ada053cd05a-kube-api-access-dctl6\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509094 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dmg5\" (UniqueName: \"kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509112 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509131 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-image-import-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509174 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509192 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e65ad778-99ee-423a-b0d7-825171576820-tmpfs\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509296 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-policies\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509322 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509344 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509361 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-images\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509383 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-profile-collector-cert\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509405 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509423 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509442 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509460 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-dir\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509478 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvg9\" (UniqueName: \"kubernetes.io/projected/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-kube-api-access-tlvg9\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509496 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509518 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/232fc61b-967c-45a9-86fc-f9481f555e6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509565 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509586 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20545c56-1cd5-4fcf-a537-f4f2212027c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509606 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-node-pullsecrets\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509648 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509668 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphbl\" (UniqueName: \"kubernetes.io/projected/da4349d8-d046-46ca-86b2-1bd4a8292bec-kube-api-access-zphbl\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509689 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit-dir\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509732 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwgs\" (UniqueName: \"kubernetes.io/projected/3ec2385e-2bf3-44c1-93fe-51f82d425444-kube-api-access-wxwgs\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509751 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-config\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509793 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509816 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knpp2\" (UniqueName: \"kubernetes.io/projected/853eae0c-97bb-4598-b299-a48780dfac55-kube-api-access-knpp2\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509833 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znpv\" (UniqueName: \"kubernetes.io/projected/a0ae9cc7-27cb-43ca-9543-286306c0272c-kube-api-access-4znpv\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509874 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da4349d8-d046-46ca-86b2-1bd4a8292bec-metrics-tls\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509893 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d2c04ff3-6875-46f8-a906-efc6ffdd312b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509911 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509927 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.509967 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-srv-cert\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510016 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510038 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-client\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510056 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510149 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510344 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510372 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510443 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510484 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/990cab2f-be71-43a6-b496-b37e97cd7156-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510508 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhb2\" (UniqueName: \"kubernetes.io/projected/ccb2de53-ecca-4439-94c0-2b65e5b21789-kube-api-access-gqhb2\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510554 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-encryption-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510578 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510630 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-config\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510656 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lnl\" (UniqueName: \"kubernetes.io/projected/d2c04ff3-6875-46f8-a906-efc6ffdd312b-kube-api-access-72lnl\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510703 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510737 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-apiservice-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510762 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbhf\" (UniqueName: \"kubernetes.io/projected/20545c56-1cd5-4fcf-a537-f4f2212027c7-kube-api-access-qwbhf\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510797 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-serving-cert\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510802 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510822 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bjr\" (UniqueName: \"kubernetes.io/projected/232fc61b-967c-45a9-86fc-f9481f555e6e-kube-api-access-67bjr\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510844 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20545c56-1cd5-4fcf-a537-f4f2212027c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510884 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-trusted-ca\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510917 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510939 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.510959 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-webhook-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.511013 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.511035 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.511102 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-config\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.522456 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.522516 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.522529 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.528699 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.530112 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.534736 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.537833 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2rbj4"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.539299 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.540740 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.549823 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.550303 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"]
Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.552369 4908 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-ingress-canary/ingress-canary-5dh6j"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.553662 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s2hzs"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.555020 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5djg7"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.555921 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.556327 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5djg7"] Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.569009 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.588463 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.609251 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611534 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-encryption-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611569 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: 
\"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611589 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611604 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-config\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611620 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lnl\" (UniqueName: \"kubernetes.io/projected/d2c04ff3-6875-46f8-a906-efc6ffdd312b-kube-api-access-72lnl\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611636 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-apiservice-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611650 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-serving-cert\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611665 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bjr\" (UniqueName: \"kubernetes.io/projected/232fc61b-967c-45a9-86fc-f9481f555e6e-kube-api-access-67bjr\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20545c56-1cd5-4fcf-a537-f4f2212027c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611693 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbhf\" (UniqueName: \"kubernetes.io/projected/20545c56-1cd5-4fcf-a537-f4f2212027c7-kube-api-access-qwbhf\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611708 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc 
kubenswrapper[4908]: I0131 07:24:12.611722 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611737 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-webhook-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611751 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-trusted-ca\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611766 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611780 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-config\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " 
pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611795 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611814 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8schl\" (UniqueName: \"kubernetes.io/projected/e65ad778-99ee-423a-b0d7-825171576820-kube-api-access-8schl\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611829 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ae9cc7-27cb-43ca-9543-286306c0272c-serving-cert\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611845 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/990cab2f-be71-43a6-b496-b37e97cd7156-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611862 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611878 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b82r\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-kube-api-access-4b82r\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611894 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611910 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611925 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dctl6\" (UniqueName: \"kubernetes.io/projected/787bae26-eaf0-4c74-84a1-4ada053cd05a-kube-api-access-dctl6\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 
07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611943 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dmg5\" (UniqueName: \"kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.611960 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612015 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-image-import-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612034 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612050 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e65ad778-99ee-423a-b0d7-825171576820-tmpfs\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612066 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-policies\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612081 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612099 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612115 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-images\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612134 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612149 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612165 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612178 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-dir\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612192 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlvg9\" (UniqueName: \"kubernetes.io/projected/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-kube-api-access-tlvg9\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612207 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612224 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612255 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/232fc61b-967c-45a9-86fc-f9481f555e6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612280 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612304 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20545c56-1cd5-4fcf-a537-f4f2212027c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612326 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-node-pullsecrets\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612346 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612367 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zphbl\" (UniqueName: \"kubernetes.io/projected/da4349d8-d046-46ca-86b2-1bd4a8292bec-kube-api-access-zphbl\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612385 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit-dir\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612407 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwgs\" (UniqueName: \"kubernetes.io/projected/3ec2385e-2bf3-44c1-93fe-51f82d425444-kube-api-access-wxwgs\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612428 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-config\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612449 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612471 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knpp2\" (UniqueName: \"kubernetes.io/projected/853eae0c-97bb-4598-b299-a48780dfac55-kube-api-access-knpp2\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612491 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4znpv\" (UniqueName: \"kubernetes.io/projected/a0ae9cc7-27cb-43ca-9543-286306c0272c-kube-api-access-4znpv\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612507 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612522 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612536 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da4349d8-d046-46ca-86b2-1bd4a8292bec-metrics-tls\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612551 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d2c04ff3-6875-46f8-a906-efc6ffdd312b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612567 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-srv-cert\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:12 crc 
kubenswrapper[4908]: I0131 07:24:12.612582 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612610 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-client\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612633 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612653 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612672 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612694 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612715 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/990cab2f-be71-43a6-b496-b37e97cd7156-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612739 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhb2\" (UniqueName: \"kubernetes.io/projected/ccb2de53-ecca-4439-94c0-2b65e5b21789-kube-api-access-gqhb2\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612895 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-node-pullsecrets\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.612896 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-config\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.613502 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.613690 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit-dir\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.614452 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-config\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.614808 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/990cab2f-be71-43a6-b496-b37e97cd7156-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615139 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615183 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-dir\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615379 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615534 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/232fc61b-967c-45a9-86fc-f9481f555e6e-images\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615822 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/20545c56-1cd5-4fcf-a537-f4f2212027c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.615899 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20545c56-1cd5-4fcf-a537-f4f2212027c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.616017 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e65ad778-99ee-423a-b0d7-825171576820-tmpfs\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.616455 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-image-import-ca\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.616594 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-audit-policies\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.616693 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.616809 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.617034 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-audit\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.617512 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da4349d8-d046-46ca-86b2-1bd4a8292bec-metrics-tls\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.617558 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.617824 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a0ae9cc7-27cb-43ca-9543-286306c0272c-trusted-ca\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618011 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618071 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787bae26-eaf0-4c74-84a1-4ada053cd05a-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618209 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-serving-cert\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618453 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-etcd-client\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618733 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0ae9cc7-27cb-43ca-9543-286306c0272c-serving-cert\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.618961 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/787bae26-eaf0-4c74-84a1-4ada053cd05a-encryption-config\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.619098 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.620158 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/990cab2f-be71-43a6-b496-b37e97cd7156-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.623110 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/232fc61b-967c-45a9-86fc-f9481f555e6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 
07:24:12.628682 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.656640 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.668417 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.709680 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.729279 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.749278 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.769433 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.789187 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.809919 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.829005 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.850557 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.870552 4908 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.889948 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.909061 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.917824 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-config\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.929899 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.949444 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.969760 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.989474 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 07:24:12 crc kubenswrapper[4908]: I0131 07:24:12.996753 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.010186 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.028695 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.049142 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.069618 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.089864 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.108823 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.130063 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.149509 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.170458 4908 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.189138 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.209564 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.229475 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.249002 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.269747 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.288697 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.309275 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.329327 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.348425 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.361358 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-apiservice-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.361547 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e65ad778-99ee-423a-b0d7-825171576820-webhook-cert\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.369203 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.387705 4908 request.go:700] Waited for 1.005126273s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.389421 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.408880 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.429288 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.449646 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 07:24:13 crc 
kubenswrapper[4908]: I0131 07:24:13.468956 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.488760 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.509277 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.531182 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.538126 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-profile-collector-cert\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.549365 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.556921 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853eae0c-97bb-4598-b299-a48780dfac55-srv-cert\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.569500 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.589898 4908 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.609815 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.612690 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.612781 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.112760828 +0000 UTC m=+160.728705482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.613056 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.613114 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.113103127 +0000 UTC m=+160.729047781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.613874 4908 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.613969 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.113941508 +0000 UTC m=+160.729886192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.614062 4908 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.614113 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.114099342 +0000 UTC m=+160.730044096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.614732 4908 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.614779 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.11476751 +0000 UTC m=+160.730712164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616703 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616784 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.116762131 +0000 UTC m=+160.732706855 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616838 4908 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616894 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls podName:ccb2de53-ecca-4439-94c0-2b65e5b21789 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.116877444 +0000 UTC m=+160.732822178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-h7tss" (UID: "ccb2de53-ecca-4439-94c0-2b65e5b21789") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616933 4908 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.616969 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert podName:3ec2385e-2bf3-44c1-93fe-51f82d425444 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.116958646 +0000 UTC m=+160.732903340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-lxjh4" (UID: "3ec2385e-2bf3-44c1-93fe-51f82d425444") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617032 4908 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617068 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.117057568 +0000 UTC m=+160.733002262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617097 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617131 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.11712088 +0000 UTC m=+160.733065574 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617162 4908 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617231 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.117207892 +0000 UTC m=+160.733152616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617310 4908 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617349 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config podName:3ec2385e-2bf3-44c1-93fe-51f82d425444 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.117337365 +0000 UTC m=+160.733282059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config") pod "openshift-controller-manager-operator-756b6f6bc6-lxjh4" (UID: "3ec2385e-2bf3-44c1-93fe-51f82d425444") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617372 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617408 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.117396247 +0000 UTC m=+160.733340931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617443 4908 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.617478 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.117467659 +0000 UTC m=+160.733412343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.618605 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: E0131 07:24:13.618665 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:14.118650379 +0000 UTC m=+160.734595073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.621760 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d2c04ff3-6875-46f8-a906-efc6ffdd312b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.628650 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.648744 4908 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.688957 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.709212 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.740234 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.756034 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.768659 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.789086 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.808905 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.828961 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.848755 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.869878 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 
07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.889069 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.909065 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.930213 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.949214 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.969627 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 07:24:13 crc kubenswrapper[4908]: I0131 07:24:13.989266 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.008754 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.029426 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.049795 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.069630 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.089300 4908 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.109610 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.129584 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130063 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130118 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130155 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130182 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130200 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130223 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130276 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130294 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" 
Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130311 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130332 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130351 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130400 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130437 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.130457 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.149794 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.169008 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.189885 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.210603 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.229132 4908 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 
07:24:14.248563 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.269428 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.289288 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.309897 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.331454 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.349429 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.388095 4908 request.go:700] Waited for 1.775025377s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.407720 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8schl\" (UniqueName: \"kubernetes.io/projected/e65ad778-99ee-423a-b0d7-825171576820-kube-api-access-8schl\") pod \"packageserver-d55dfcdfc-jm4lf\" (UID: \"e65ad778-99ee-423a-b0d7-825171576820\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.427803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zphbl\" (UniqueName: 
\"kubernetes.io/projected/da4349d8-d046-46ca-86b2-1bd4a8292bec-kube-api-access-zphbl\") pod \"dns-operator-744455d44c-fhcvf\" (UID: \"da4349d8-d046-46ca-86b2-1bd4a8292bec\") " pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.462465 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.485197 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dctl6\" (UniqueName: \"kubernetes.io/projected/787bae26-eaf0-4c74-84a1-4ada053cd05a-kube-api-access-dctl6\") pod \"apiserver-76f77b778f-vln8d\" (UID: \"787bae26-eaf0-4c74-84a1-4ada053cd05a\") " pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.504439 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dmg5\" (UniqueName: \"kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.524604 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.551150 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bjr\" (UniqueName: \"kubernetes.io/projected/232fc61b-967c-45a9-86fc-f9481f555e6e-kube-api-access-67bjr\") pod 
\"machine-api-operator-5694c8668f-7zrxf\" (UID: \"232fc61b-967c-45a9-86fc-f9481f555e6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.578559 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b82r\" (UniqueName: \"kubernetes.io/projected/990cab2f-be71-43a6-b496-b37e97cd7156-kube-api-access-4b82r\") pod \"cluster-image-registry-operator-dc59b4c8b-4lhmd\" (UID: \"990cab2f-be71-43a6-b496-b37e97cd7156\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.589051 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbhf\" (UniqueName: \"kubernetes.io/projected/20545c56-1cd5-4fcf-a537-f4f2212027c7-kube-api-access-qwbhf\") pod \"openshift-apiserver-operator-796bbdcf4f-jk7hz\" (UID: \"20545c56-1cd5-4fcf-a537-f4f2212027c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.589293 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.611891 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.621402 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.626483 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knpp2\" (UniqueName: \"kubernetes.io/projected/853eae0c-97bb-4598-b299-a48780dfac55-kube-api-access-knpp2\") pod \"catalog-operator-68c6474976-nb82q\" (UID: \"853eae0c-97bb-4598-b299-a48780dfac55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.634030 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.650200 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lnl\" (UniqueName: \"kubernetes.io/projected/d2c04ff3-6875-46f8-a906-efc6ffdd312b-kube-api-access-72lnl\") pod \"multus-admission-controller-857f4d67dd-jwbt2\" (UID: \"d2c04ff3-6875-46f8-a906-efc6ffdd312b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.663224 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.668220 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0f4e037-6ca7-43b0-a8c7-5f029cf833f7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5rwgc\" (UID: \"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.692323 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"] Jan 31 07:24:14 crc kubenswrapper[4908]: W0131 07:24:14.702184 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode65ad778_99ee_423a_b0d7_825171576820.slice/crio-6a7efcf3fcaa245bd4f343c1f29ebfadde2d66145c9c20e711b124729263f100 WatchSource:0}: Error finding container 6a7efcf3fcaa245bd4f343c1f29ebfadde2d66145c9c20e711b124729263f100: Status 404 returned error can't find the container with id 6a7efcf3fcaa245bd4f343c1f29ebfadde2d66145c9c20e711b124729263f100 Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.702252 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.705885 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.708890 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.716236 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlvg9\" (UniqueName: \"kubernetes.io/projected/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-kube-api-access-tlvg9\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.716469 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.721087 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" event={"ID":"e65ad778-99ee-423a-b0d7-825171576820","Type":"ContainerStarted","Data":"6a7efcf3fcaa245bd4f343c1f29ebfadde2d66145c9c20e711b124729263f100"} Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.729824 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.749009 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.754795 4908 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.769470 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.776094 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4znpv\" (UniqueName: \"kubernetes.io/projected/a0ae9cc7-27cb-43ca-9543-286306c0272c-kube-api-access-4znpv\") pod \"console-operator-58897d9998-l4rd4\" (UID: \"a0ae9cc7-27cb-43ca-9543-286306c0272c\") " pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.778897 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.795689 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.799094 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.809234 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.830474 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.852205 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.872678 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.889762 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.909104 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.911206 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.912668 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zrxf"] Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.918143 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fhcvf"] Jan 31 07:24:14 crc kubenswrapper[4908]: W0131 07:24:14.924450 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod232fc61b_967c_45a9_86fc_f9481f555e6e.slice/crio-41838788463b861541442ee2edc8733ec82f184655371307b4f81f5cffa4d454 WatchSource:0}: Error finding container 41838788463b861541442ee2edc8733ec82f184655371307b4f81f5cffa4d454: Status 404 returned error can't find the container with id 41838788463b861541442ee2edc8733ec82f184655371307b4f81f5cffa4d454 Jan 31 07:24:14 crc kubenswrapper[4908]: W0131 07:24:14.925895 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda4349d8_d046_46ca_86b2_1bd4a8292bec.slice/crio-9ee360d71159a88869784a6311d2339716d8f455a211bc954f817b1c194b6b88 WatchSource:0}: Error finding container 9ee360d71159a88869784a6311d2339716d8f455a211bc954f817b1c194b6b88: Status 404 returned error can't find the container with id 9ee360d71159a88869784a6311d2339716d8f455a211bc954f817b1c194b6b88 Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.931201 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.949214 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.970198 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 07:24:14 crc kubenswrapper[4908]: I0131 07:24:14.989338 4908 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.012701 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.031094 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.036138 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-encryption-config\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.038601 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz"] Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.040201 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vln8d"] Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.049285 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20545c56_1cd5_4fcf_a537_f4f2212027c7.slice/crio-e5db308b54d5e259866e9c321a096297a5e3845718554f35b956547bf6b61903 WatchSource:0}: Error finding container e5db308b54d5e259866e9c321a096297a5e3845718554f35b956547bf6b61903: Status 404 returned error can't find the container with id e5db308b54d5e259866e9c321a096297a5e3845718554f35b956547bf6b61903 Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.050004 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 
07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.051398 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd"] Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.053218 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.055379 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787bae26_eaf0_4c74_84a1_4ada053cd05a.slice/crio-818a873cdeeb6d1c6c88d40092a12a4af3e97f61dddc7ec6c71d693e2d7ec86f WatchSource:0}: Error finding container 818a873cdeeb6d1c6c88d40092a12a4af3e97f61dddc7ec6c71d693e2d7ec86f: Status 404 returned error can't find the container with id 818a873cdeeb6d1c6c88d40092a12a4af3e97f61dddc7ec6c71d693e2d7ec86f Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.058026 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.058411 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod990cab2f_be71_43a6_b496_b37e97cd7156.slice/crio-3057b46b7df98d31c367b652f9e9b996b96b026595855b6c7fcfba2952c921b2 WatchSource:0}: Error finding container 3057b46b7df98d31c367b652f9e9b996b96b026595855b6c7fcfba2952c921b2: Status 404 returned error can't find the container with id 3057b46b7df98d31c367b652f9e9b996b96b026595855b6c7fcfba2952c921b2 Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.060618 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"] Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.066905 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853eae0c_97bb_4598_b299_a48780dfac55.slice/crio-70f0ccc0bf996714c1c1ccb04c19edcfd6965c1e80a088237a34dfc01f13389d WatchSource:0}: Error finding container 70f0ccc0bf996714c1c1ccb04c19edcfd6965c1e80a088237a34dfc01f13389d: Status 404 returned error can't find the container with id 70f0ccc0bf996714c1c1ccb04c19edcfd6965c1e80a088237a34dfc01f13389d Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.068739 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.079179 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwgs\" (UniqueName: \"kubernetes.io/projected/3ec2385e-2bf3-44c1-93fe-51f82d425444-kube-api-access-wxwgs\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.089128 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.096429 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.109697 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.129663 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130520 4908 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130608 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.130588243 +0000 UTC m=+162.746532897 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130880 4908 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130881 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130908 4908 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130904 4908 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130943 4908 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130921 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.130912571 +0000 UTC m=+162.746857225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130995 4908 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131001 4908 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.130881 4908 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131028 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131005073 +0000 UTC m=+162.746949807 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131049 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131040744 +0000 UTC m=+162.746985538 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131062 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls podName:ccb2de53-ecca-4439-94c0-2b65e5b21789 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131057465 +0000 UTC m=+162.747002249 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-h7tss" (UID: "ccb2de53-ecca-4439-94c0-2b65e5b21789") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131076 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client podName:d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131068625 +0000 UTC m=+162.747013279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client") pod "apiserver-7bbb656c7d-m8jlm" (UID: "d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131086 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle podName:ab833be5-a275-4c72-92d4-f6c93dd249a8 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131080615 +0000 UTC m=+162.747025269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-ltcmm" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131105 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert podName:3ec2385e-2bf3-44c1-93fe-51f82d425444 nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:16.131097396 +0000 UTC m=+162.747042180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-lxjh4" (UID: "3ec2385e-2bf3-44c1-93fe-51f82d425444") : failed to sync secret cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.131119 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config podName:3ec2385e-2bf3-44c1-93fe-51f82d425444 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.131113196 +0000 UTC m=+162.747057980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config") pod "openshift-controller-manager-operator-756b6f6bc6-lxjh4" (UID: "3ec2385e-2bf3-44c1-93fe-51f82d425444") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.151627 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.170275 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.200039 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.208885 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jwbt2"] Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.208956 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc"] Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.212257 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.229682 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c04ff3_6875_46f8_a906_efc6ffdd312b.slice/crio-745ca94cdb97db6c3c88c72957914b02935436b99a2d891947a6bf394b2434da WatchSource:0}: Error finding container 745ca94cdb97db6c3c88c72957914b02935436b99a2d891947a6bf394b2434da: Status 404 returned error can't find the container with id 745ca94cdb97db6c3c88c72957914b02935436b99a2d891947a6bf394b2434da Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.242267 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l4rd4"] Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.251294 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254221 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9742dfc9-568c-49f1-89ae-8f3959bf33ad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254263 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config\") pod \"console-f9d7485db-fjlrr\" 
(UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254291 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254319 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254344 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9742dfc9-568c-49f1-89ae-8f3959bf33ad-serving-cert\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254365 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254388 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vrj7\" (UniqueName: \"kubernetes.io/projected/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-kube-api-access-8vrj7\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254430 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284e4410-86ef-4c86-a9e1-6859dea75ab2-config\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254461 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254492 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254523 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vh4\" (UniqueName: 
\"kubernetes.io/projected/345e9f59-f1cc-40f5-97ea-42940f12805c-kube-api-access-48vh4\") pod \"downloads-7954f5f757-8rjct\" (UID: \"345e9f59-f1cc-40f5-97ea-42940f12805c\") " pod="openshift-console/downloads-7954f5f757-8rjct" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254549 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-proxy-tls\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254571 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44skr\" (UniqueName: \"kubernetes.io/projected/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-kube-api-access-44skr\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254602 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254621 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 
07:24:15.254648 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4jg\" (UniqueName: \"kubernetes.io/projected/a1eb7039-2ff7-48da-85d1-471dfe4f956b-kube-api-access-7n4jg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254700 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-config\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254742 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v8l\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254766 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-service-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254788 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.254839 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:15.754803501 +0000 UTC m=+162.370748155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254889 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.254927 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lc6h\" (UniqueName: \"kubernetes.io/projected/900fb657-d80e-4887-8144-424a3cf39946-kube-api-access-2lc6h\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 
07:24:15.254962 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-service-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255002 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255041 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltjq\" (UniqueName: \"kubernetes.io/projected/f913ba98-cee4-48cc-9167-575792819b69-kube-api-access-2ltjq\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255126 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53de0c2d-11ec-4f56-a585-7497b8c698a2-metrics-tls\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255151 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-metrics-certs\") pod 
\"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255173 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284e4410-86ef-4c86-a9e1-6859dea75ab2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255207 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255224 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255253 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: 
I0131 07:24:15.255269 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-default-certificate\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255302 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1eb7039-2ff7-48da-85d1-471dfe4f956b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255325 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255340 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53de0c2d-11ec-4f56-a585-7497b8c698a2-trusted-ca\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255356 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-images\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255372 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284e4410-86ef-4c86-a9e1-6859dea75ab2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255402 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255420 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900fb657-d80e-4887-8144-424a3cf39946-service-ca-bundle\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255448 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-auth-proxy-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255464 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255479 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qwq\" (UniqueName: \"kubernetes.io/projected/9742dfc9-568c-49f1-89ae-8f3959bf33ad-kube-api-access-b7qwq\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255493 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255511 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-config\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255527 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255555 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swnd\" (UniqueName: \"kubernetes.io/projected/1b130058-7294-4cc4-8e1d-8dcde09ec947-kube-api-access-5swnd\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255604 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvvs\" (UniqueName: \"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-kube-api-access-zxvvs\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255636 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255659 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: 
\"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255682 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255702 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh776\" (UniqueName: \"kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255728 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hf7g\" (UniqueName: \"kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255754 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvv6\" (UniqueName: \"kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255782 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shc6c\" (UniqueName: \"kubernetes.io/projected/b26392ba-50ff-4652-bbb2-52dc2328effb-kube-api-access-shc6c\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255804 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-client\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255828 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9fc355-495d-408c-9084-b781e3494409-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255921 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.255946 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-serving-cert\") pod \"etcd-operator-b45778765-xz8pb\" (UID: 
\"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256004 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256068 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b130058-7294-4cc4-8e1d-8dcde09ec947-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256122 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b130058-7294-4cc4-8e1d-8dcde09ec947-proxy-tls\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256171 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b26392ba-50ff-4652-bbb2-52dc2328effb-serving-cert\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256222 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f913ba98-cee4-48cc-9167-575792819b69-machine-approver-tls\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256242 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-stats-auth\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256277 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cft97\" (UniqueName: \"kubernetes.io/projected/bd9fc355-495d-408c-9084-b781e3494409-kube-api-access-cft97\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256307 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256375 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256525 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfcd\" (UniqueName: \"kubernetes.io/projected/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-kube-api-access-zqfcd\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256591 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256634 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9fc355-495d-408c-9084-b781e3494409-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256652 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: 
\"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.256671 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.272584 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.289389 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.298070 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ae9cc7_27cb_43ca_9543_286306c0272c.slice/crio-607f2b00dbd0723a3430d0a6fe0fc9344079b333a04a7d91c40ea1e4c4ff7fc0 WatchSource:0}: Error finding container 607f2b00dbd0723a3430d0a6fe0fc9344079b333a04a7d91c40ea1e4c4ff7fc0: Status 404 returned error can't find the container with id 607f2b00dbd0723a3430d0a6fe0fc9344079b333a04a7d91c40ea1e4c4ff7fc0 Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.309284 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.329343 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.343845 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gqhb2\" (UniqueName: \"kubernetes.io/projected/ccb2de53-ecca-4439-94c0-2b65e5b21789-kube-api-access-gqhb2\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.349052 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357476 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357637 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357674 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357697 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vrj7\" (UniqueName: 
\"kubernetes.io/projected/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-kube-api-access-8vrj7\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357719 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9742dfc9-568c-49f1-89ae-8f3959bf33ad-serving-cert\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357753 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357787 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-plugins-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357838 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284e4410-86ef-4c86-a9e1-6859dea75ab2-config\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357873 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357907 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-config\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357929 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-proxy-tls\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.357951 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48vh4\" (UniqueName: \"kubernetes.io/projected/345e9f59-f1cc-40f5-97ea-42940f12805c-kube-api-access-48vh4\") pod \"downloads-7954f5f757-8rjct\" (UID: \"345e9f59-f1cc-40f5-97ea-42940f12805c\") " pod="openshift-console/downloads-7954f5f757-8rjct" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358016 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44skr\" (UniqueName: \"kubernetes.io/projected/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-kube-api-access-44skr\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358040 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbj7b\" (UniqueName: \"kubernetes.io/projected/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-kube-api-access-mbj7b\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358094 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358117 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-mountpoint-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358152 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-config\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358173 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert\") pod 
\"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358195 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4jg\" (UniqueName: \"kubernetes.io/projected/a1eb7039-2ff7-48da-85d1-471dfe4f956b-kube-api-access-7n4jg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358261 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5v8l\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358285 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-service-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358306 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358342 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358363 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lc6h\" (UniqueName: \"kubernetes.io/projected/900fb657-d80e-4887-8144-424a3cf39946-kube-api-access-2lc6h\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358384 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-srv-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358403 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-service-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358420 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc 
kubenswrapper[4908]: I0131 07:24:15.358439 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ltjq\" (UniqueName: \"kubernetes.io/projected/f913ba98-cee4-48cc-9167-575792819b69-kube-api-access-2ltjq\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358460 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53de0c2d-11ec-4f56-a585-7497b8c698a2-metrics-tls\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358481 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-metrics-certs\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358516 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284e4410-86ef-4c86-a9e1-6859dea75ab2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358561 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca\") pod \"controller-manager-879f6c89f-vgk68\" (UID: 
\"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358585 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358608 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4jr\" (UniqueName: \"kubernetes.io/projected/15ac1097-42d7-4ca8-b258-9418f0e0993e-kube-api-access-7m4jr\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358632 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358667 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8kcd\" (UniqueName: \"kubernetes.io/projected/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-kube-api-access-t8kcd\") pod \"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358700 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-default-certificate\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358726 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-serving-cert\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358762 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358810 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1eb7039-2ff7-48da-85d1-471dfe4f956b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358859 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " 
pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358883 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53de0c2d-11ec-4f56-a585-7497b8c698a2-trusted-ca\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358903 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-csi-data-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358926 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284e4410-86ef-4c86-a9e1-6859dea75ab2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358948 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-images\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.358972 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtbh\" (UniqueName: 
\"kubernetes.io/projected/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-kube-api-access-4dtbh\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359065 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-auth-proxy-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359086 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359104 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-key\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900fb657-d80e-4887-8144-424a3cf39946-service-ca-bundle\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359159 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359181 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qwq\" (UniqueName: \"kubernetes.io/projected/9742dfc9-568c-49f1-89ae-8f3959bf33ad-kube-api-access-b7qwq\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359201 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359248 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb91aa35-0cab-44ca-b05c-a8e73038efdc-metrics-tls\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359272 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-config\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 
07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359294 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359334 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfml7\" (UniqueName: \"kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359370 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj6x8\" (UniqueName: \"kubernetes.io/projected/726a0082-0e03-4539-9f62-ee7776d0a7d8-kube-api-access-rj6x8\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359409 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swnd\" (UniqueName: \"kubernetes.io/projected/1b130058-7294-4cc4-8e1d-8dcde09ec947-kube-api-access-5swnd\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359458 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fb91aa35-0cab-44ca-b05c-a8e73038efdc-config-volume\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359479 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg65b\" (UniqueName: \"kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359505 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvvs\" (UniqueName: \"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-kube-api-access-zxvvs\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359555 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359578 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359599 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-cert\") pod \"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359626 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359648 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359672 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shc6c\" (UniqueName: \"kubernetes.io/projected/b26392ba-50ff-4652-bbb2-52dc2328effb-kube-api-access-shc6c\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359695 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh776\" (UniqueName: \"kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " 
pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359718 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hf7g\" (UniqueName: \"kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359745 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbvv6\" (UniqueName: \"kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359770 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-client\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359792 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9fc355-495d-408c-9084-b781e3494409-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359854 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359879 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-registration-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359931 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-serving-cert\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.359955 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-node-bootstrap-token\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360007 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360046 
4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsgc\" (UniqueName: \"kubernetes.io/projected/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-kube-api-access-gnsgc\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360088 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360113 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b130058-7294-4cc4-8e1d-8dcde09ec947-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360152 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b130058-7294-4cc4-8e1d-8dcde09ec947-proxy-tls\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360207 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-socket-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: 
\"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360229 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-cabundle\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360251 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360307 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b26392ba-50ff-4652-bbb2-52dc2328effb-serving-cert\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360329 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-certs\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/f913ba98-cee4-48cc-9167-575792819b69-machine-approver-tls\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360420 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-stats-auth\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360461 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cft97\" (UniqueName: \"kubernetes.io/projected/bd9fc355-495d-408c-9084-b781e3494409-kube-api-access-cft97\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.361943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.362093 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc 
kubenswrapper[4908]: I0131 07:24:15.362209 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f913ba98-cee4-48cc-9167-575792819b69-auth-proxy-config\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.362428 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:15.862414936 +0000 UTC m=+162.478359590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.360486 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.363034 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900fb657-d80e-4887-8144-424a3cf39946-service-ca-bundle\") pod \"router-default-5444994796-lnblx\" (UID: 
\"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.363269 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.363600 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.364070 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.364276 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.364624 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-config\") pod 
\"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.365093 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.365810 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9fc355-495d-408c-9084-b781e3494409-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.366377 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-service-ca\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.366397 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.366735 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-serving-cert\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.367000 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-service-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.367006 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.367052 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.367514 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.367523 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-default-certificate\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.368324 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b130058-7294-4cc4-8e1d-8dcde09ec947-proxy-tls\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.369005 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284e4410-86ef-4c86-a9e1-6859dea75ab2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.369112 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.369480 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-etcd-client\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:15 crc 
kubenswrapper[4908]: I0131 07:24:15.369721 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53de0c2d-11ec-4f56-a585-7497b8c698a2-metrics-tls\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.369896 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-metrics-certs\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.370190 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284e4410-86ef-4c86-a9e1-6859dea75ab2-config\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.370555 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-images\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.379048 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-proxy-tls\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.379484 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b130058-7294-4cc4-8e1d-8dcde09ec947-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.379493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f913ba98-cee4-48cc-9167-575792819b69-machine-approver-tls\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.379578 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.380119 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.380670 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls\") pod 
\"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.380746 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.380819 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfcd\" (UniqueName: \"kubernetes.io/projected/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-kube-api-access-zqfcd\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.380970 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.381014 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.381039 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9fc355-495d-408c-9084-b781e3494409-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.381062 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.381085 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twvcd\" (UniqueName: \"kubernetes.io/projected/f13ebf33-3e36-4c3b-9348-1f9fb94544e0-kube-api-access-twvcd\") pod \"migrator-59844c95c7-d4l95\" (UID: \"f13ebf33-3e36-4c3b-9348-1f9fb94544e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.381180 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmh9\" (UniqueName: \"kubernetes.io/projected/fb91aa35-0cab-44ca-b05c-a8e73038efdc-kube-api-access-6qmh9\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.382719 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.382895 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9742dfc9-568c-49f1-89ae-8f3959bf33ad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.383007 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.383035 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.382926 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.383716 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.383793 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.384189 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9fc355-495d-408c-9084-b781e3494409-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.384534 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9742dfc9-568c-49f1-89ae-8f3959bf33ad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.384565 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/900fb657-d80e-4887-8144-424a3cf39946-stats-auth\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 
07:24:15.385517 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9742dfc9-568c-49f1-89ae-8f3959bf33ad-serving-cert\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.386456 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.387313 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.387520 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.387831 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b26392ba-50ff-4652-bbb2-52dc2328effb-serving-cert\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 
07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.388406 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1eb7039-2ff7-48da-85d1-471dfe4f956b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.388634 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.390577 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.390857 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.391150 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:15 crc 
kubenswrapper[4908]: I0131 07:24:15.396862 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.398405 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53de0c2d-11ec-4f56-a585-7497b8c698a2-trusted-ca\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.408373 4908 request.go:700] Waited for 1.67591825s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.409769 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.428921 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.450656 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.469413 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485554 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-node-bootstrap-token\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485590 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485610 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnsgc\" (UniqueName: \"kubernetes.io/projected/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-kube-api-access-gnsgc\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485631 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-socket-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485646 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-cabundle\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485661 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485682 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-certs\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485726 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twvcd\" (UniqueName: \"kubernetes.io/projected/f13ebf33-3e36-4c3b-9348-1f9fb94544e0-kube-api-access-twvcd\") pod \"migrator-59844c95c7-d4l95\" (UID: \"f13ebf33-3e36-4c3b-9348-1f9fb94544e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485742 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qmh9\" (UniqueName: \"kubernetes.io/projected/fb91aa35-0cab-44ca-b05c-a8e73038efdc-kube-api-access-6qmh9\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485759 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: 
I0131 07:24:15.485779 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-plugins-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485811 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485833 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-config\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485889 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbj7b\" (UniqueName: \"kubernetes.io/projected/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-kube-api-access-mbj7b\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485910 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-mountpoint-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.485964 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-srv-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486022 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4jr\" (UniqueName: \"kubernetes.io/projected/15ac1097-42d7-4ca8-b258-9418f0e0993e-kube-api-access-7m4jr\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486052 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8kcd\" (UniqueName: \"kubernetes.io/projected/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-kube-api-access-t8kcd\") pod \"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486074 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-serving-cert\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486089 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume\") pod \"collect-profiles-29497395-k4x5t\" 
(UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486119 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-csi-data-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486142 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtbh\" (UniqueName: \"kubernetes.io/projected/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-kube-api-access-4dtbh\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486164 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-key\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486188 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb91aa35-0cab-44ca-b05c-a8e73038efdc-metrics-tls\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486211 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfml7\" (UniqueName: \"kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7\") pod \"marketplace-operator-79b997595-7cskt\" (UID: 
\"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486240 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj6x8\" (UniqueName: \"kubernetes.io/projected/726a0082-0e03-4539-9f62-ee7776d0a7d8-kube-api-access-rj6x8\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486271 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb91aa35-0cab-44ca-b05c-a8e73038efdc-config-volume\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486291 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg65b\" (UniqueName: \"kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486314 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486335 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-cert\") pod 
\"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486404 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-registration-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.486678 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-registration-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.492859 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-csi-data-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.493556 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-socket-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.494508 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-cabundle\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495129 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-plugins-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495113 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495148 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-serving-cert\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495245 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-srv-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495343 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/726a0082-0e03-4539-9f62-ee7776d0a7d8-mountpoint-dir\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4"
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.495751 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:15.995738368 +0000 UTC m=+162.611683022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495802 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-node-bootstrap-token\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.495968 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.496086 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-config\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: \"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.497486 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.497548 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.498289 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-signing-key\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.499072 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-certs\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.499077 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.499141 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15ac1097-42d7-4ca8-b258-9418f0e0993e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.499159 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-cert\") pod \"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.499951 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb91aa35-0cab-44ca-b05c-a8e73038efdc-config-volume\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.500349 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb91aa35-0cab-44ca-b05c-a8e73038efdc-metrics-tls\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.501195 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b26392ba-50ff-4652-bbb2-52dc2328effb-config\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.513502 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.529205 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.548707 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.569602 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.587459 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.587643 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.087618749 +0000 UTC m=+162.703563403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.587959 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.588364 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.088356188 +0000 UTC m=+162.704300842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.589139 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.611440 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.629760 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.650644 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.689249 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.689435 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.189418624 +0000 UTC m=+162.805363268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.691622 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.691934 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.191925829 +0000 UTC m=+162.807870473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.706588 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hf7g\" (UniqueName: \"kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g\") pod \"route-controller-manager-6576b87f9c-wgj2j\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.727562 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" event={"ID":"232fc61b-967c-45a9-86fc-f9481f555e6e","Type":"ContainerStarted","Data":"262638389dd3c03b4c4db24f6674384367d7e092eed909e28986b99090de0525"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.727615 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" event={"ID":"232fc61b-967c-45a9-86fc-f9481f555e6e","Type":"ContainerStarted","Data":"34e22a31d474d82acf0082bda7c88afb17c4d8dc69e573e3a682132d4be52c5f"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.727628 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" event={"ID":"232fc61b-967c-45a9-86fc-f9481f555e6e","Type":"ContainerStarted","Data":"41838788463b861541442ee2edc8733ec82f184655371307b4f81f5cffa4d454"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.729198 4908 generic.go:334] "Generic (PLEG): container finished" podID="787bae26-eaf0-4c74-84a1-4ada053cd05a" containerID="6a6df7c1c899772f8359709eb36b638aeeb757a27be493ff2e8f5d4d675157a3" exitCode=0
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.729243 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" event={"ID":"787bae26-eaf0-4c74-84a1-4ada053cd05a","Type":"ContainerDied","Data":"6a6df7c1c899772f8359709eb36b638aeeb757a27be493ff2e8f5d4d675157a3"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.729260 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" event={"ID":"787bae26-eaf0-4c74-84a1-4ada053cd05a","Type":"ContainerStarted","Data":"818a873cdeeb6d1c6c88d40092a12a4af3e97f61dddc7ec6c71d693e2d7ec86f"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.730854 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qwq\" (UniqueName: \"kubernetes.io/projected/9742dfc9-568c-49f1-89ae-8f3959bf33ad-kube-api-access-b7qwq\") pod \"openshift-config-operator-7777fb866f-4vxx6\" (UID: \"9742dfc9-568c-49f1-89ae-8f3959bf33ad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.732148 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" event={"ID":"990cab2f-be71-43a6-b496-b37e97cd7156","Type":"ContainerStarted","Data":"5b03bf2f96ab9d7b846f6351b3edb3bf557e418032c8800ee6c63bae320715ff"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.732925 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" event={"ID":"990cab2f-be71-43a6-b496-b37e97cd7156","Type":"ContainerStarted","Data":"3057b46b7df98d31c367b652f9e9b996b96b026595855b6c7fcfba2952c921b2"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.739400 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" event={"ID":"a0ae9cc7-27cb-43ca-9543-286306c0272c","Type":"ContainerStarted","Data":"a55efb9c4e4e0e1b1711b1cf4fcc341a6f7e83d2320f94fa9d4d23bc02b4dc6c"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.739427 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" event={"ID":"a0ae9cc7-27cb-43ca-9543-286306c0272c","Type":"ContainerStarted","Data":"607f2b00dbd0723a3430d0a6fe0fc9344079b333a04a7d91c40ea1e4c4ff7fc0"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.739578 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l4rd4"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.743109 4908 patch_prober.go:28] interesting pod/console-operator-58897d9998-l4rd4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.743157 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" podUID="a0ae9cc7-27cb-43ca-9543-286306c0272c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.750415 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" event={"ID":"20545c56-1cd5-4fcf-a537-f4f2212027c7","Type":"ContainerStarted","Data":"8233152ee3abe1b0fe64df978f188e2e0c09f96c41688b16f131a6535d0bc3da"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.750785 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" event={"ID":"20545c56-1cd5-4fcf-a537-f4f2212027c7","Type":"ContainerStarted","Data":"e5db308b54d5e259866e9c321a096297a5e3845718554f35b956547bf6b61903"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.751069 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swnd\" (UniqueName: \"kubernetes.io/projected/1b130058-7294-4cc4-8e1d-8dcde09ec947-kube-api-access-5swnd\") pod \"machine-config-controller-84d6567774-rzszg\" (UID: \"1b130058-7294-4cc4-8e1d-8dcde09ec947\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.761259 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" event={"ID":"d2c04ff3-6875-46f8-a906-efc6ffdd312b","Type":"ContainerStarted","Data":"8ac2acc9cec07b191fe4b9a08bcd3315bc160965e10adecde4887e889d231bd6"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.761316 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" event={"ID":"d2c04ff3-6875-46f8-a906-efc6ffdd312b","Type":"ContainerStarted","Data":"745ca94cdb97db6c3c88c72957914b02935436b99a2d891947a6bf394b2434da"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.762806 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvvs\" (UniqueName: \"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-kube-api-access-zxvvs\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.764992 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" event={"ID":"da4349d8-d046-46ca-86b2-1bd4a8292bec","Type":"ContainerStarted","Data":"afbe66e491d901413e50cf718bf0cc439c50e6f411b7ee83be7b3dc2bd4da0ae"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.765066 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" event={"ID":"da4349d8-d046-46ca-86b2-1bd4a8292bec","Type":"ContainerStarted","Data":"27b07fdb476ad03b3897425b7449424458edc91326f4682a5136fba3e712cc17"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.765079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" event={"ID":"da4349d8-d046-46ca-86b2-1bd4a8292bec","Type":"ContainerStarted","Data":"9ee360d71159a88869784a6311d2339716d8f455a211bc954f817b1c194b6b88"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.773311 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" event={"ID":"853eae0c-97bb-4598-b299-a48780dfac55","Type":"ContainerStarted","Data":"4d9de4df8a424131794b111b1ea252c06a12724d7c8a69f2b70509a6d9cfe2b5"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.773347 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.773357 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" event={"ID":"853eae0c-97bb-4598-b299-a48780dfac55","Type":"ContainerStarted","Data":"70f0ccc0bf996714c1c1ccb04c19edcfd6965c1e80a088237a34dfc01f13389d"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.774546 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" event={"ID":"e65ad778-99ee-423a-b0d7-825171576820","Type":"ContainerStarted","Data":"dcc9c6bca6d49ac794a986bd65c3b37180724c422d37657813be9419fc3e2f9b"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.775402 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.776783 4908 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jm4lf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.776823 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" podUID="e65ad778-99ee-423a-b0d7-825171576820" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.777130 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" event={"ID":"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7","Type":"ContainerStarted","Data":"cb08e367a21081cac4ed81f80c9c2aaf4fd70d49b435af05e4e6c4cfe9d7b8d1"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.777158 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" event={"ID":"a0f4e037-6ca7-43b0-a8c7-5f029cf833f7","Type":"ContainerStarted","Data":"f7614361be286eb7f036df7fcf8c27be19262f241809279a799a568d8e7a5ae8"}
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.783712 4908 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nb82q container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.783767 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" podUID="853eae0c-97bb-4598-b299-a48780dfac55" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.785715 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbvv6\" (UniqueName: \"kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6\") pod \"controller-manager-879f6c89f-vgk68\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.794928 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.795084 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.295063928 +0000 UTC m=+162.911008582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.797052 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.797364 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.297354907 +0000 UTC m=+162.913299561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.807586 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh776\" (UniqueName: \"kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776\") pod \"console-f9d7485db-fjlrr\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " pod="openshift-console/console-f9d7485db-fjlrr"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.823891 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5v8l\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.844108 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lc6h\" (UniqueName: \"kubernetes.io/projected/900fb657-d80e-4887-8144-424a3cf39946-kube-api-access-2lc6h\") pod \"router-default-5444994796-lnblx\" (UID: \"900fb657-d80e-4887-8144-424a3cf39946\") " pod="openshift-ingress/router-default-5444994796-lnblx"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.866480 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ltjq\" (UniqueName: \"kubernetes.io/projected/f913ba98-cee4-48cc-9167-575792819b69-kube-api-access-2ltjq\") pod \"machine-approver-56656f9798-rlxht\" (UID: \"f913ba98-cee4-48cc-9167-575792819b69\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.884488 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9b70b6-16d7-48f4-96e9-1ad34f82cac0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h7c7k\" (UID: \"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.898035 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.898208 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.398183517 +0000 UTC m=+163.014128171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.898649 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:15 crc kubenswrapper[4908]: E0131 07:24:15.901138 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.401107212 +0000 UTC m=+163.017051866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.906242 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284e4410-86ef-4c86-a9e1-6859dea75ab2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nkm6l\" (UID: \"284e4410-86ef-4c86-a9e1-6859dea75ab2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.920997 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.926968 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48vh4\" (UniqueName: \"kubernetes.io/projected/345e9f59-f1cc-40f5-97ea-42940f12805c-kube-api-access-48vh4\") pod \"downloads-7954f5f757-8rjct\" (UID: \"345e9f59-f1cc-40f5-97ea-42940f12805c\") " pod="openshift-console/downloads-7954f5f757-8rjct"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.935476 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.945306 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53de0c2d-11ec-4f56-a585-7497b8c698a2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jt7r7\" (UID: \"53de0c2d-11ec-4f56-a585-7497b8c698a2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.945401 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.954119 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.966797 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cft97\" (UniqueName: \"kubernetes.io/projected/bd9fc355-495d-408c-9084-b781e3494409-kube-api-access-cft97\") pod \"kube-storage-version-migrator-operator-b67b599dd-xptlh\" (UID: \"bd9fc355-495d-408c-9084-b781e3494409\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.971074 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lnblx"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.975401 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"
Jan 31 07:24:15 crc kubenswrapper[4908]: I0131 07:24:15.983188 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4jg\" (UniqueName: \"kubernetes.io/projected/a1eb7039-2ff7-48da-85d1-471dfe4f956b-kube-api-access-7n4jg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bzw7\" (UID: \"a1eb7039-2ff7-48da-85d1-471dfe4f956b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7"
Jan 31 07:24:15 crc kubenswrapper[4908]: W0131 07:24:15.991318 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900fb657_d80e_4887_8144_424a3cf39946.slice/crio-1add4d94fb86923f3e549065df9dab86cc9ea3ff9deaac167d014a6c767320f7 WatchSource:0}: Error finding container 1add4d94fb86923f3e549065df9dab86cc9ea3ff9deaac167d014a6c767320f7: Status 404 returned error can't find the container with id 1add4d94fb86923f3e549065df9dab86cc9ea3ff9deaac167d014a6c767320f7
Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.000627 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.001239 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.501214544 +0000 UTC m=+163.117159198 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.003118 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.003515 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.503501303 +0000 UTC m=+163.119445957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.003673 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.005841 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shc6c\" (UniqueName: \"kubernetes.io/projected/b26392ba-50ff-4652-bbb2-52dc2328effb-kube-api-access-shc6c\") pod \"authentication-operator-69f744f599-4jhvx\" (UID: \"b26392ba-50ff-4652-bbb2-52dc2328effb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.013261 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.019942 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.031463 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44skr\" (UniqueName: \"kubernetes.io/projected/36518e3b-feb3-4ca9-8a3d-debecb7e80ca-kube-api-access-44skr\") pod \"etcd-operator-b45778765-xz8pb\" (UID: \"36518e3b-feb3-4ca9-8a3d-debecb7e80ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.047500 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfcd\" (UniqueName: \"kubernetes.io/projected/312e81dd-7f53-4f53-a0fa-c0ef69a50cd2-kube-api-access-zqfcd\") pod \"machine-config-operator-74547568cd-hhsgr\" (UID: \"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.066230 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vrj7\" 
(UniqueName: \"kubernetes.io/projected/0d0a4cbb-8e82-42c7-8661-4c4f371699e0-kube-api-access-8vrj7\") pod \"package-server-manager-789f6589d5-gmh7h\" (UID: \"0d0a4cbb-8e82-42c7-8661-4c4f371699e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.081699 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8rjct" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.084765 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.102261 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.104573 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.105470 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.605441902 +0000 UTC m=+163.221386556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.106120 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.106418 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.606406806 +0000 UTC m=+163.222351460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.111459 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4jr\" (UniqueName: \"kubernetes.io/projected/15ac1097-42d7-4ca8-b258-9418f0e0993e-kube-api-access-7m4jr\") pod \"olm-operator-6b444d44fb-vzfwr\" (UID: \"15ac1097-42d7-4ca8-b258-9418f0e0993e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.124698 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8kcd\" (UniqueName: \"kubernetes.io/projected/21666eaa-906a-49c3-aaf9-b2466bc9d7f3-kube-api-access-t8kcd\") pod \"ingress-canary-5dh6j\" (UID: \"21666eaa-906a-49c3-aaf9-b2466bc9d7f3\") " pod="openshift-ingress-canary/ingress-canary-5dh6j" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.149014 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtbh\" (UniqueName: \"kubernetes.io/projected/cbe88e47-edc8-4f4a-bc0b-006a3551c85c-kube-api-access-4dtbh\") pod \"service-ca-9c57cc56f-s2hzs\" (UID: \"cbe88e47-edc8-4f4a-bc0b-006a3551c85c\") " pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.160310 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.171443 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.173373 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnsgc\" (UniqueName: \"kubernetes.io/projected/ae655cf0-e8e0-4118-9dac-90bfc5b7aafe-kube-api-access-gnsgc\") pod \"machine-config-server-cgsr5\" (UID: \"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe\") " pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.184448 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.185551 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfml7\" (UniqueName: \"kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7\") pod \"marketplace-operator-79b997595-7cskt\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.186101 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.191698 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.197686 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.207310 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.207552 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.707518844 +0000 UTC m=+163.323463498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.207628 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.207693 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj6x8\" (UniqueName: 
\"kubernetes.io/projected/726a0082-0e03-4539-9f62-ee7776d0a7d8-kube-api-access-rj6x8\") pod \"csi-hostpathplugin-2rbj4\" (UID: \"726a0082-0e03-4539-9f62-ee7776d0a7d8\") " pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.207811 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.207898 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.208401 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209122 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 
07:24:16.209170 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209199 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209224 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209304 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209344 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client\") pod 
\"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.209370 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.210184 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.710170142 +0000 UTC m=+163.326114796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.210312 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec2385e-2bf3-44c1-93fe-51f82d425444-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.210805 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.211569 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.211688 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.211960 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.212603 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-etcd-client\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.213491 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a-serving-cert\") pod \"apiserver-7bbb656c7d-m8jlm\" (UID: \"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 
07:24:16.213495 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltcmm\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.214032 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccb2de53-ecca-4439-94c0-2b65e5b21789-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7tss\" (UID: \"ccb2de53-ecca-4439-94c0-2b65e5b21789\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.216094 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec2385e-2bf3-44c1-93fe-51f82d425444-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lxjh4\" (UID: \"3ec2385e-2bf3-44c1-93fe-51f82d425444\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.222280 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qmh9\" (UniqueName: \"kubernetes.io/projected/fb91aa35-0cab-44ca-b05c-a8e73038efdc-kube-api-access-6qmh9\") pod \"dns-default-5djg7\" (UID: \"fb91aa35-0cab-44ca-b05c-a8e73038efdc\") " pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.251234 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbj7b\" (UniqueName: \"kubernetes.io/projected/e2a40a7d-4ba8-4fd0-863f-53d987b1383c-kube-api-access-mbj7b\") pod \"service-ca-operator-777779d784-dbx7j\" (UID: 
\"e2a40a7d-4ba8-4fd0-863f-53d987b1383c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.282387 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.284729 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg65b\" (UniqueName: \"kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b\") pod \"collect-profiles-29497395-k4x5t\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.286524 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.310224 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.310568 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:16.810553081 +0000 UTC m=+163.426497735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.313899 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.329354 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.338365 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.348637 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.350562 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.353047 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.354868 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5dh6j" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.358174 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.363233 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.372392 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.389252 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.396261 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cgsr5" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.403220 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.412376 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.412691 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:16.912681165 +0000 UTC m=+163.528625819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.455134 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.475077 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twvcd\" (UniqueName: \"kubernetes.io/projected/f13ebf33-3e36-4c3b-9348-1f9fb94544e0-kube-api-access-twvcd\") pod \"migrator-59844c95c7-d4l95\" (UID: \"f13ebf33-3e36-4c3b-9348-1f9fb94544e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" Jan 31 07:24:16 crc kubenswrapper[4908]: W0131 07:24:16.501616 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9b70b6_16d7_48f4_96e9_1ad34f82cac0.slice/crio-ef49926261f7094931c178b73a977891ddd12baf19a36b75a2439edd8267062c WatchSource:0}: Error finding container ef49926261f7094931c178b73a977891ddd12baf19a36b75a2439edd8267062c: Status 404 returned error can't find the container with id ef49926261f7094931c178b73a977891ddd12baf19a36b75a2439edd8267062c Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.505797 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xz8pb"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.513788 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.513928 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.013907415 +0000 UTC m=+163.629852079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.514144 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.514438 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.014428699 +0000 UTC m=+163.630373353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.560419 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8rjct"] Jan 31 07:24:16 crc kubenswrapper[4908]: W0131 07:24:16.560947 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36518e3b_feb3_4ca9_8a3d_debecb7e80ca.slice/crio-f69ed87995765667e8dc581a8bb1a4582ccdafcf0b20e55c8b3e0bfd5bbff3c0 WatchSource:0}: Error finding container f69ed87995765667e8dc581a8bb1a4582ccdafcf0b20e55c8b3e0bfd5bbff3c0: Status 404 returned error can't find the container with id f69ed87995765667e8dc581a8bb1a4582ccdafcf0b20e55c8b3e0bfd5bbff3c0 Jan 31 07:24:16 crc kubenswrapper[4908]: W0131 07:24:16.576807 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345e9f59_f1cc_40f5_97ea_42940f12805c.slice/crio-a762551b02f4f78e044a0ff56b6477484511d68ece45b82baf5027d35e2804b5 WatchSource:0}: Error finding container a762551b02f4f78e044a0ff56b6477484511d68ece45b82baf5027d35e2804b5: Status 404 returned error can't find the container with id a762551b02f4f78e044a0ff56b6477484511d68ece45b82baf5027d35e2804b5 Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.610192 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4jhvx"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.611772 4908 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.615066 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.615353 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.115336161 +0000 UTC m=+163.731280815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.626494 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.717244 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.717872 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.217857765 +0000 UTC m=+163.833802419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.758405 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.797203 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.802661 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rjct" 
event={"ID":"345e9f59-f1cc-40f5-97ea-42940f12805c","Type":"ContainerStarted","Data":"a762551b02f4f78e044a0ff56b6477484511d68ece45b82baf5027d35e2804b5"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.805067 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" event={"ID":"f913ba98-cee4-48cc-9167-575792819b69","Type":"ContainerStarted","Data":"f451e02f042393810f6627dfeb64203ce2a55f1b91113eeaf6cb92b41739fcde"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.806376 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lnblx" event={"ID":"900fb657-d80e-4887-8144-424a3cf39946","Type":"ContainerStarted","Data":"1add4d94fb86923f3e549065df9dab86cc9ea3ff9deaac167d014a6c767320f7"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.809225 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.815392 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" event={"ID":"61affb6e-e659-45c3-b1bb-f328e073304f","Type":"ContainerStarted","Data":"76e0ccc472d22acc3d30e6f6f6c87a2fc54325091ff5a04c02e868bbcbb11f70"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.818694 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.818811 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.318788618 +0000 UTC m=+163.934733262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.818852 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.819171 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.319160668 +0000 UTC m=+163.935105322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.824132 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"] Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.835659 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" event={"ID":"d2c04ff3-6875-46f8-a906-efc6ffdd312b","Type":"ContainerStarted","Data":"5be2ed113fb5f77e6c84ce53fd3e70e55ad1963340288889c964349da82f38c3"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.839989 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" event={"ID":"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0","Type":"ContainerStarted","Data":"ef49926261f7094931c178b73a977891ddd12baf19a36b75a2439edd8267062c"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.842673 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" event={"ID":"787bae26-eaf0-4c74-84a1-4ada053cd05a","Type":"ContainerStarted","Data":"8f649c9d1007b1bef3c989464d3f9772c7fbe119095b3df51ba3ad9f7071f9a5"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.843536 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" event={"ID":"a1eb7039-2ff7-48da-85d1-471dfe4f956b","Type":"ContainerStarted","Data":"1b1c61b4e8ff82c037876f4d3dda8c340088943017e0f410d2e0b9b58c974388"} Jan 
31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.844434 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" event={"ID":"1b130058-7294-4cc4-8e1d-8dcde09ec947","Type":"ContainerStarted","Data":"00e9d745550ba491a1ce49f43c443e31c4e24203fe14a5d60d612159c0ff852a"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.845225 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" event={"ID":"9742dfc9-568c-49f1-89ae-8f3959bf33ad","Type":"ContainerStarted","Data":"94f16222b97fcb13d6b7d68a6efa5afdfdac3934c5d766c268a2979553697f78"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.846454 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" event={"ID":"36518e3b-feb3-4ca9-8a3d-debecb7e80ca","Type":"ContainerStarted","Data":"f69ed87995765667e8dc581a8bb1a4582ccdafcf0b20e55c8b3e0bfd5bbff3c0"} Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.846863 4908 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jm4lf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.846901 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" podUID="e65ad778-99ee-423a-b0d7-825171576820" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.847112 4908 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nb82q container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.847185 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" podUID="853eae0c-97bb-4598-b299-a48780dfac55" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.847374 4908 patch_prober.go:28] interesting pod/console-operator-58897d9998-l4rd4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.847452 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" podUID="a0ae9cc7-27cb-43ca-9543-286306c0272c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 31 07:24:16 crc kubenswrapper[4908]: I0131 07:24:16.919527 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:16 crc kubenswrapper[4908]: E0131 07:24:16.925411 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:17.425389186 +0000 UTC m=+164.041333840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:16.999924 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.002675 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2rbj4"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.002717 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.024842 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.025160 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.525148539 +0000 UTC m=+164.141093193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.063458 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.113653 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.126416 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.126898 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.626880663 +0000 UTC m=+164.242825317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.180673 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" podStartSLOduration=128.180654199 podStartE2EDuration="2m8.180654199s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.179878159 +0000 UTC m=+163.795822813" watchObservedRunningTime="2026-01-31 07:24:17.180654199 +0000 UTC m=+163.796598853" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.188656 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.228864 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.229189 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:17.729176311 +0000 UTC m=+164.345120965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.313661 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s2hzs"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.330654 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.331182 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.831166301 +0000 UTC m=+164.447110955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.341587 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zrxf" podStartSLOduration=128.341566987 podStartE2EDuration="2m8.341566987s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.338620962 +0000 UTC m=+163.954565616" watchObservedRunningTime="2026-01-31 07:24:17.341566987 +0000 UTC m=+163.957511641" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.432278 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.432660 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:17.932643448 +0000 UTC m=+164.548588102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: W0131 07:24:17.483958 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312e81dd_7f53_4f53_a0fa_c0ef69a50cd2.slice/crio-55d3f715f5af1f817644e19c9c61a117da9182bc6f20a667a5a0e6ed9174730b WatchSource:0}: Error finding container 55d3f715f5af1f817644e19c9c61a117da9182bc6f20a667a5a0e6ed9174730b: Status 404 returned error can't find the container with id 55d3f715f5af1f817644e19c9c61a117da9182bc6f20a667a5a0e6ed9174730b Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.500537 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" podStartSLOduration=128.500516735 podStartE2EDuration="2m8.500516735s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.499103549 +0000 UTC m=+164.115048203" watchObservedRunningTime="2026-01-31 07:24:17.500516735 +0000 UTC m=+164.116461389" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.502375 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5rwgc" podStartSLOduration=128.502364373 podStartE2EDuration="2m8.502364373s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.466607587 +0000 UTC m=+164.082552251" watchObservedRunningTime="2026-01-31 07:24:17.502364373 +0000 UTC m=+164.118309027" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.534017 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.534190 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.034165196 +0000 UTC m=+164.650109850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.534256 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.534567 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.034555076 +0000 UTC m=+164.650499730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.541268 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" podStartSLOduration=128.541254658 podStartE2EDuration="2m8.541254658s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.539365059 +0000 UTC m=+164.155309713" watchObservedRunningTime="2026-01-31 07:24:17.541254658 +0000 UTC m=+164.157199312" Jan 31 07:24:17 crc kubenswrapper[4908]: W0131 07:24:17.616871 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae655cf0_e8e0_4118_9dac_90bfc5b7aafe.slice/crio-93d51f24a73d73c3d1980b5435358ab27268643958fa533d0ef31e184024b1d3 WatchSource:0}: Error finding container 93d51f24a73d73c3d1980b5435358ab27268643958fa533d0ef31e184024b1d3: Status 404 returned error can't find the container with id 93d51f24a73d73c3d1980b5435358ab27268643958fa533d0ef31e184024b1d3 Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.634914 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 
07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.635456 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.135435508 +0000 UTC m=+164.751380162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: W0131 07:24:17.635540 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab833be5_a275_4c72_92d4_f6c93dd249a8.slice/crio-7fe779b0b601cf3338a3cfdd76e9c57d91ea47a3b903462570f4c6b83dc9d2dd WatchSource:0}: Error finding container 7fe779b0b601cf3338a3cfdd76e9c57d91ea47a3b903462570f4c6b83dc9d2dd: Status 404 returned error can't find the container with id 7fe779b0b601cf3338a3cfdd76e9c57d91ea47a3b903462570f4c6b83dc9d2dd Jan 31 07:24:17 crc kubenswrapper[4908]: W0131 07:24:17.641677 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ac1097_42d7_4ca8_b258_9418f0e0993e.slice/crio-4cb84acbf5b9f1ce133af6f18bc448516e901e6f3c5966bd12c44d29a11929d6 WatchSource:0}: Error finding container 4cb84acbf5b9f1ce133af6f18bc448516e901e6f3c5966bd12c44d29a11929d6: Status 404 returned error can't find the container with id 4cb84acbf5b9f1ce133af6f18bc448516e901e6f3c5966bd12c44d29a11929d6 Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.647952 4908 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.744650 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.744917 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.24490642 +0000 UTC m=+164.860851074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.777655 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"] Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.818325 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4lhmd" podStartSLOduration=128.818302878 podStartE2EDuration="2m8.818302878s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.806016724 +0000 UTC m=+164.421961378" watchObservedRunningTime="2026-01-31 07:24:17.818302878 +0000 UTC m=+164.434247532" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.829784 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jk7hz" podStartSLOduration=128.829768112 podStartE2EDuration="2m8.829768112s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.828796427 +0000 UTC m=+164.444741081" watchObservedRunningTime="2026-01-31 07:24:17.829768112 +0000 UTC m=+164.445712766" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.849938 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.850411 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.35039268 +0000 UTC m=+164.966337334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.884023 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fhcvf" podStartSLOduration=128.88400773 podStartE2EDuration="2m8.88400773s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:17.882239485 +0000 UTC m=+164.498184129" watchObservedRunningTime="2026-01-31 07:24:17.88400773 +0000 UTC m=+164.499952384" Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.904869 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" event={"ID":"bd9fc355-495d-408c-9084-b781e3494409","Type":"ContainerStarted","Data":"87940ab433ffa618236144b1e263d11a332c2ca562b5c1a7946b8cb0b502ff0e"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.952486 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:17 crc kubenswrapper[4908]: E0131 07:24:17.953106 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.453089048 +0000 UTC m=+165.069033822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.967098 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" event={"ID":"c294adb2-f360-4af0-9919-dc678235c37d","Type":"ContainerStarted","Data":"42f1a017586cb978399c44ee3c076cab5f4879c4c8d0a21ff8ff9a0de54a2ec2"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.967129 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cgsr5" event={"ID":"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe","Type":"ContainerStarted","Data":"93d51f24a73d73c3d1980b5435358ab27268643958fa533d0ef31e184024b1d3"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.967154 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" event={"ID":"284e4410-86ef-4c86-a9e1-6859dea75ab2","Type":"ContainerStarted","Data":"2ea17aec47100d01c5450e338f13c2adf9474011e80f87acb365593c06da5aba"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.967165 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" 
event={"ID":"726a0082-0e03-4539-9f62-ee7776d0a7d8","Type":"ContainerStarted","Data":"a5bbe8f39f0ce7483f274fb9975d139d4d96715832d7f8f37571d7ac8411d392"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.975209 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" event={"ID":"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2","Type":"ContainerStarted","Data":"55d3f715f5af1f817644e19c9c61a117da9182bc6f20a667a5a0e6ed9174730b"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.978971 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fjlrr" event={"ID":"097d2f96-ce86-4d47-a55c-c717d272a8ef","Type":"ContainerStarted","Data":"0d15d2d1ef0960f36e9c4003db41f1f0806d8004d5771c5b04c28e5a01c6459c"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.987214 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" event={"ID":"ab833be5-a275-4c72-92d4-f6c93dd249a8","Type":"ContainerStarted","Data":"7fe779b0b601cf3338a3cfdd76e9c57d91ea47a3b903462570f4c6b83dc9d2dd"} Jan 31 07:24:17 crc kubenswrapper[4908]: I0131 07:24:17.995502 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lnblx" event={"ID":"900fb657-d80e-4887-8144-424a3cf39946","Type":"ContainerStarted","Data":"e59e9aed5459277d27872f694e698a1d795dad5f851d18f571547cbe72584c47"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:17.999057 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" event={"ID":"15ac1097-42d7-4ca8-b258-9418f0e0993e","Type":"ContainerStarted","Data":"4cb84acbf5b9f1ce133af6f18bc448516e901e6f3c5966bd12c44d29a11929d6"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:17.999777 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" 
event={"ID":"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a","Type":"ContainerStarted","Data":"9b67e5346048fd26d12e57d3fb7aa17704d47c3a416f2f579a74768a13c71b0a"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.001596 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" event={"ID":"b26392ba-50ff-4652-bbb2-52dc2328effb","Type":"ContainerStarted","Data":"961218611a89c4a4d9d094c24942a5d9d6ebe2491e86270c4bdfce1f5955f604"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.003539 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" event={"ID":"53de0c2d-11ec-4f56-a585-7497b8c698a2","Type":"ContainerStarted","Data":"2a44c34d0b857ff3aa2ba301949861cb9bd6b0babe1d3c9c0655c83342bcef7b"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.004721 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" event={"ID":"fe51cf59-5f34-4f01-8404-2f95b7ca742b","Type":"ContainerStarted","Data":"4531e8029653d51967138d9ac156596ca220042bdcd056c660b3c1be1da507d8"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.022588 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" event={"ID":"f913ba98-cee4-48cc-9167-575792819b69","Type":"ContainerStarted","Data":"62e8a9707766e782f95ea880837532ea4ee946db586d8bf6b61b2357bf8acf7b"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.029452 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" event={"ID":"1b18d2de-0b08-43c9-bcbf-1ced621bac08","Type":"ContainerStarted","Data":"926831a840dba993a32f831aa8bc5c1a8be09efaaa48fd689f636d609db93ead"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.033821 4908 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" event={"ID":"cbe88e47-edc8-4f4a-bc0b-006a3551c85c","Type":"ContainerStarted","Data":"930e03a134ec22330bdaa3bc68e3a3e3576cf2fb3b88bd7504b0703651a3d0e7"} Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.054886 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.055302 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.555260513 +0000 UTC m=+165.171205167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.161957 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.162236 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.6622249 +0000 UTC m=+165.278169554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.182766 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jwbt2" podStartSLOduration=129.182747335 podStartE2EDuration="2m9.182747335s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:18.181255257 +0000 UTC m=+164.797199931" watchObservedRunningTime="2026-01-31 07:24:18.182747335 +0000 UTC m=+164.798691989" Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.262623 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.262895 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.762859556 +0000 UTC m=+165.378804230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.263091 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.263488 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.763476532 +0000 UTC m=+165.379421186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.264016 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lnblx" podStartSLOduration=129.264003405 podStartE2EDuration="2m9.264003405s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:18.261782488 +0000 UTC m=+164.877727142" watchObservedRunningTime="2026-01-31 07:24:18.264003405 +0000 UTC m=+164.879948059" Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.354805 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95"] Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.363630 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.363797 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:18.863776398 +0000 UTC m=+165.479721062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.363871 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.364212 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.864201689 +0000 UTC m=+165.480146343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.378957 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5djg7"] Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.455572 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h"] Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.459712 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4"] Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.465364 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.465860 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:18.9658395 +0000 UTC m=+165.581784154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.472095 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j"] Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.511935 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5dh6j"] Jan 31 07:24:18 crc kubenswrapper[4908]: W0131 07:24:18.549941 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb91aa35_0cab_44ca_b05c_a8e73038efdc.slice/crio-fb7a6e304ccdb5cb1fa63a483610d831cd74b36e5cbdc0cb43745f9eb60d14d6 WatchSource:0}: Error finding container fb7a6e304ccdb5cb1fa63a483610d831cd74b36e5cbdc0cb43745f9eb60d14d6: Status 404 returned error can't find the container with id fb7a6e304ccdb5cb1fa63a483610d831cd74b36e5cbdc0cb43745f9eb60d14d6 Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.566617 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.566946 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.066930167 +0000 UTC m=+165.682874821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: W0131 07:24:18.617267 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0a4cbb_8e82_42c7_8661_4c4f371699e0.slice/crio-5e491566a63cff4afa4e947b8e44c301beab23fafb40343cf59937f708a34bba WatchSource:0}: Error finding container 5e491566a63cff4afa4e947b8e44c301beab23fafb40343cf59937f708a34bba: Status 404 returned error can't find the container with id 5e491566a63cff4afa4e947b8e44c301beab23fafb40343cf59937f708a34bba Jan 31 07:24:18 crc kubenswrapper[4908]: W0131 07:24:18.623913 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21666eaa_906a_49c3_aaf9_b2466bc9d7f3.slice/crio-cda3afec4ee315ce844c811b1875f15f3eb21ed305e90519be22542b2733d7dc WatchSource:0}: Error finding container cda3afec4ee315ce844c811b1875f15f3eb21ed305e90519be22542b2733d7dc: Status 404 returned error can't find the container with id cda3afec4ee315ce844c811b1875f15f3eb21ed305e90519be22542b2733d7dc Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.667970 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.668109 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.168087096 +0000 UTC m=+165.784031760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.668254 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.668523 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.168516237 +0000 UTC m=+165.784460891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.769402 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.769739 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.269725377 +0000 UTC m=+165.885670031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.873477 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.874414 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.374376175 +0000 UTC m=+165.990320829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.972594 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lnblx"
Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.974865 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:18 crc kubenswrapper[4908]: E0131 07:24:18.975265 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.475245617 +0000 UTC m=+166.091190261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.976131 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 31 07:24:18 crc kubenswrapper[4908]: I0131 07:24:18.976206 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.036159 4908 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jm4lf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.036214 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" podUID="e65ad778-99ee-423a-b0d7-825171576820" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.045419 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rjct" event={"ID":"345e9f59-f1cc-40f5-97ea-42940f12805c","Type":"ContainerStarted","Data":"62e94a243fb724f5e7a2f82c5dee7fd26fbec5b3f998e78366eb7d29d35f8224"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.053112 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" event={"ID":"ccb2de53-ecca-4439-94c0-2b65e5b21789","Type":"ContainerStarted","Data":"c324bcee4adf2ef6307d03105a1e36ba45401401d6e40cc07e3b75e91929a8d0"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.064298 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" event={"ID":"bd9fc355-495d-408c-9084-b781e3494409","Type":"ContainerStarted","Data":"b89366676add52f964d6d48625f90f1e1db1f383b9a4ce7f1fd6e8ec04e0637c"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.074791 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5dh6j" event={"ID":"21666eaa-906a-49c3-aaf9-b2466bc9d7f3","Type":"ContainerStarted","Data":"cda3afec4ee315ce844c811b1875f15f3eb21ed305e90519be22542b2733d7dc"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.076174 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.076448 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.576434116 +0000 UTC m=+166.192378770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.078676 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" event={"ID":"3ec2385e-2bf3-44c1-93fe-51f82d425444","Type":"ContainerStarted","Data":"7813d18ee2bdd76dbcae8481eafd03c3946aee106a61536f050edb41730fcad2"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.084074 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" event={"ID":"1b18d2de-0b08-43c9-bcbf-1ced621bac08","Type":"ContainerStarted","Data":"045657d08ee33e47c4d0719a002f2f8851c792e0f12bf3a20c5b131a9b0d59a8"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.084764 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.091283 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.091347 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.095241 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" event={"ID":"36518e3b-feb3-4ca9-8a3d-debecb7e80ca","Type":"ContainerStarted","Data":"2daff1f4c802f87f630e595470e66bf42d5dc05d4504d05c6f733bc8c406d479"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.095435 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xptlh" podStartSLOduration=130.095414442 podStartE2EDuration="2m10.095414442s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.087875059 +0000 UTC m=+165.703819713" watchObservedRunningTime="2026-01-31 07:24:19.095414442 +0000 UTC m=+165.711359096"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.107838 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" event={"ID":"1f9b70b6-16d7-48f4-96e9-1ad34f82cac0","Type":"ContainerStarted","Data":"d288776a173afc18a2a978ae43f40da6b4cd353ea314393005a5d6cd1e0824c7"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.110327 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" event={"ID":"53de0c2d-11ec-4f56-a585-7497b8c698a2","Type":"ContainerStarted","Data":"0c00f75b66454af31c70d27d53dedee77c427174db4cdb7de9fd7dd334c3cd30"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.112708 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" event={"ID":"cbe88e47-edc8-4f4a-bc0b-006a3551c85c","Type":"ContainerStarted","Data":"e9e746c7a6f157c27317b05df82ae6c02d28d5fc41c7bee174bd902775689836"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.115231 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" event={"ID":"f13ebf33-3e36-4c3b-9348-1f9fb94544e0","Type":"ContainerStarted","Data":"28c5c0bf5a37ff22ce65e7dd1acfac222a245eaac3b7edc5557197cb81dd54e7"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.116935 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" event={"ID":"61affb6e-e659-45c3-b1bb-f328e073304f","Type":"ContainerStarted","Data":"95de7b969e700edee582912771674e68fd023ce575bf4ad16109c3372f64c164"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.117079 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.118608 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.118657 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.133432 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podStartSLOduration=129.133412324 podStartE2EDuration="2m9.133412324s" podCreationTimestamp="2026-01-31 07:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.132648975 +0000 UTC m=+165.748593629" watchObservedRunningTime="2026-01-31 07:24:19.133412324 +0000 UTC m=+165.749356978"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.134832 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fjlrr" event={"ID":"097d2f96-ce86-4d47-a55c-c717d272a8ef","Type":"ContainerStarted","Data":"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.138494 4908 generic.go:334] "Generic (PLEG): container finished" podID="9742dfc9-568c-49f1-89ae-8f3959bf33ad" containerID="17e18bbe9b2537d01ed8e5509a363c5788f0c0e8d0483b9eb00b75d9aa5e2095" exitCode=0
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.138562 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" event={"ID":"9742dfc9-568c-49f1-89ae-8f3959bf33ad","Type":"ContainerDied","Data":"17e18bbe9b2537d01ed8e5509a363c5788f0c0e8d0483b9eb00b75d9aa5e2095"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.145969 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" event={"ID":"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2","Type":"ContainerStarted","Data":"c3d37ce374963006db53f925632219eee88653bcbafc4cf467d6378bf6f97c00"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.149898 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" event={"ID":"1b130058-7294-4cc4-8e1d-8dcde09ec947","Type":"ContainerStarted","Data":"47e53a914d415bec93efda5def55004dc917e817273b1d442d405e14ae95706f"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.171367 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" event={"ID":"e2a40a7d-4ba8-4fd0-863f-53d987b1383c","Type":"ContainerStarted","Data":"3ef7710c35cc78f81a24f6a2b6b57d2be240a113effe1a06ad424791c50be65a"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.172723 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5djg7" event={"ID":"fb91aa35-0cab-44ca-b05c-a8e73038efdc","Type":"ContainerStarted","Data":"fb7a6e304ccdb5cb1fa63a483610d831cd74b36e5cbdc0cb43745f9eb60d14d6"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.176601 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.179651 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.679630987 +0000 UTC m=+166.295575641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.180694 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xz8pb" podStartSLOduration=130.180678294 podStartE2EDuration="2m10.180678294s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.155940721 +0000 UTC m=+165.771885375" watchObservedRunningTime="2026-01-31 07:24:19.180678294 +0000 UTC m=+165.796622938"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.181632 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podStartSLOduration=130.181626628 podStartE2EDuration="2m10.181626628s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.176109057 +0000 UTC m=+165.792053711" watchObservedRunningTime="2026-01-31 07:24:19.181626628 +0000 UTC m=+165.797571282"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.184628 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" event={"ID":"15ac1097-42d7-4ca8-b258-9418f0e0993e","Type":"ContainerStarted","Data":"bb7d28a6bd1e60611ebe663fa0754bc298f8c2fa35e36974764eb1392f9839c8"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.185259 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.196038 4908 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vzfwr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.196085 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" podUID="15ac1097-42d7-4ca8-b258-9418f0e0993e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.197037 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h7c7k" podStartSLOduration=130.197020922 podStartE2EDuration="2m10.197020922s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.196171021 +0000 UTC m=+165.812115685" watchObservedRunningTime="2026-01-31 07:24:19.197020922 +0000 UTC m=+165.812965576"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.204296 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" event={"ID":"a1eb7039-2ff7-48da-85d1-471dfe4f956b","Type":"ContainerStarted","Data":"a03082174628d0ce779cc1f31fc527a28fd3b7a04453e4d88d434b9f78b0e6db"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.205668 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" event={"ID":"0d0a4cbb-8e82-42c7-8661-4c4f371699e0","Type":"ContainerStarted","Data":"5e491566a63cff4afa4e947b8e44c301beab23fafb40343cf59937f708a34bba"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.209588 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cgsr5" event={"ID":"ae655cf0-e8e0-4118-9dac-90bfc5b7aafe","Type":"ContainerStarted","Data":"49606ed599608966f66e55fddfc2c938e1f3e17df4c323a20bb0c4b81049b6cd"}
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.219471 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-s2hzs" podStartSLOduration=129.219445856 podStartE2EDuration="2m9.219445856s" podCreationTimestamp="2026-01-31 07:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.213671689 +0000 UTC m=+165.829616353" watchObservedRunningTime="2026-01-31 07:24:19.219445856 +0000 UTC m=+165.835390510"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.268524 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cgsr5" podStartSLOduration=7.268505202 podStartE2EDuration="7.268505202s" podCreationTimestamp="2026-01-31 07:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.252391679 +0000 UTC m=+165.868336333" watchObservedRunningTime="2026-01-31 07:24:19.268505202 +0000 UTC m=+165.884449856"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.287848 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.290028 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.790003422 +0000 UTC m=+166.405948076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.316040 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bzw7" podStartSLOduration=130.316016388 podStartE2EDuration="2m10.316016388s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.268947473 +0000 UTC m=+165.884892127" watchObservedRunningTime="2026-01-31 07:24:19.316016388 +0000 UTC m=+165.931961042"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.347147 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" podStartSLOduration=130.347132014 podStartE2EDuration="2m10.347132014s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.346887198 +0000 UTC m=+165.962831852" watchObservedRunningTime="2026-01-31 07:24:19.347132014 +0000 UTC m=+165.963076668"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.370767 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fjlrr" podStartSLOduration=130.370740638 podStartE2EDuration="2m10.370740638s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:19.369098776 +0000 UTC m=+165.985043430" watchObservedRunningTime="2026-01-31 07:24:19.370740638 +0000 UTC m=+165.986685292"
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.410701 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.410995 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.910931907 +0000 UTC m=+166.526876561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.411200 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.411957 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:19.911945963 +0000 UTC m=+166.527890617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.513514 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.513704 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.013671146 +0000 UTC m=+166.629615800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.514234 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.514767 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.014744464 +0000 UTC m=+166.630689138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.616806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.617284 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.117264728 +0000 UTC m=+166.733209382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.719719 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.720347 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.220335845 +0000 UTC m=+166.836280499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.820715 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.820895 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.320868108 +0000 UTC m=+166.936812762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.821151 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.821484 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.321473704 +0000 UTC m=+166.937418358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.922532 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.922743 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.422710075 +0000 UTC m=+167.038654739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.923191 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:19 crc kubenswrapper[4908]: E0131 07:24:19.923733 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.42371161 +0000 UTC m=+167.039656274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.977752 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:19 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:19 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:19 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:19 crc kubenswrapper[4908]: I0131 07:24:19.977819 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.023870 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.024170 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.524151001 +0000 UTC m=+167.140095655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.125685 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.125996 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.625964397 +0000 UTC m=+167.241909051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.214043 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" event={"ID":"284e4410-86ef-4c86-a9e1-6859dea75ab2","Type":"ContainerStarted","Data":"e15584a82d8654210201be01de5bf4f11d70fb161b17eb2b1b354f4f76cf10cd"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.216786 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" event={"ID":"53de0c2d-11ec-4f56-a585-7497b8c698a2","Type":"ContainerStarted","Data":"b37038854c9db810628f05d36b02df9ceb5b221fe801a32490f804ba0cc57c0b"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.219266 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" event={"ID":"f13ebf33-3e36-4c3b-9348-1f9fb94544e0","Type":"ContainerStarted","Data":"b2fdeaa4d3047fcba152e4292e0cfce7636c0871dbbdc9496dfb8b9a72978764"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.219323 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" event={"ID":"f13ebf33-3e36-4c3b-9348-1f9fb94544e0","Type":"ContainerStarted","Data":"6abbf6c4de066cac990283bfb5c120ac55b23cc17c838d7f95419c55953d5df5"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.220757 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" event={"ID":"f913ba98-cee4-48cc-9167-575792819b69","Type":"ContainerStarted","Data":"975bfd3077eb21a5380b38d0fe83f0ec7217b80fd6b9552f7d6db2101d696cba"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.222125 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" event={"ID":"fe51cf59-5f34-4f01-8404-2f95b7ca742b","Type":"ContainerStarted","Data":"c1e155866104b6dc0710b8f68dc7c3386a414264ab5b5cce8f826379dfa04c58"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.223300 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" event={"ID":"b26392ba-50ff-4652-bbb2-52dc2328effb","Type":"ContainerStarted","Data":"ec41f27b0225387504066a3209bb30ee517d46ed5735ece6535c9c48ecda1225"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.224766 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" event={"ID":"ab833be5-a275-4c72-92d4-f6c93dd249a8","Type":"ContainerStarted","Data":"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.224842 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.226247 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.226359 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.726341335 +0000 UTC m=+167.342285989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.226401 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.226413 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" event={"ID":"1b130058-7294-4cc4-8e1d-8dcde09ec947","Type":"ContainerStarted","Data":"e8ea1b74d42972e304085a06ca50e7a02f5a21e672956bdb9d8e17dd9fe5a494"} Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.226761 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.726753806 +0000 UTC m=+167.342698460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.227082 4908 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ltcmm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.227154 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.228711 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" event={"ID":"ccb2de53-ecca-4439-94c0-2b65e5b21789","Type":"ContainerStarted","Data":"05dee4f22482bc883e51eec6aed65417a0e3f88d875e68e8f8421a8ac9fde09f"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.228746 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" event={"ID":"ccb2de53-ecca-4439-94c0-2b65e5b21789","Type":"ContainerStarted","Data":"502a0b08a8dac0a69fe71f8598a99c2b1636bd21216fec7b3183e39a3feddbcd"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.230039 4908 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" event={"ID":"3ec2385e-2bf3-44c1-93fe-51f82d425444","Type":"ContainerStarted","Data":"5b25f4712c4b0b701240a3c6e2f1a02b5b545af0321cc4208a9b11ada1588af5"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.231591 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" event={"ID":"c294adb2-f360-4af0-9919-dc678235c37d","Type":"ContainerStarted","Data":"81c68dd225c27de490fa72542f296f137444119d14355f66d67e51ec9b3bf6fe"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.232349 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.233583 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" event={"ID":"312e81dd-7f53-4f53-a0fa-c0ef69a50cd2","Type":"ContainerStarted","Data":"a5d865b84588b36ba1b20472b6f3d7a600279dca5a439352a6e89e8dac55a33f"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.234010 4908 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cskt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.234043 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.235078 4908 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5djg7" event={"ID":"fb91aa35-0cab-44ca-b05c-a8e73038efdc","Type":"ContainerStarted","Data":"6961298832beba1bd53cc660156ce1d7952445b2e64059c76ddf8df38f668f69"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.235101 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5djg7" event={"ID":"fb91aa35-0cab-44ca-b05c-a8e73038efdc","Type":"ContainerStarted","Data":"baa751c14ee39a62d2152e1afa59d481047c74f4b09ce7404946ea4f04e82c19"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.235575 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5djg7" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.247276 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" event={"ID":"787bae26-eaf0-4c74-84a1-4ada053cd05a","Type":"ContainerStarted","Data":"37158cf016387ac1898833d21e10c9147ab9765da0805001f05dff4b4e20f48d"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.248802 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5dh6j" event={"ID":"21666eaa-906a-49c3-aaf9-b2466bc9d7f3","Type":"ContainerStarted","Data":"449c1860b7436806ad75c67b8b160266ef67990744e6dfb9ace3f8ef451441db"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.251562 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nkm6l" podStartSLOduration=131.251549731 podStartE2EDuration="2m11.251549731s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.249702093 +0000 UTC m=+166.865646747" watchObservedRunningTime="2026-01-31 07:24:20.251549731 +0000 UTC m=+166.867494385" Jan 31 07:24:20 crc 
kubenswrapper[4908]: I0131 07:24:20.251843 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" event={"ID":"9742dfc9-568c-49f1-89ae-8f3959bf33ad","Type":"ContainerStarted","Data":"d4d4ff32d228a922bd1eb647b421069fdac4499f61b0c62146238493e3aa000d"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.252004 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.253214 4908 generic.go:334] "Generic (PLEG): container finished" podID="d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a" containerID="3bbd8156c939cbba9d7ca6702789a67646ee0f06b1527be920083493bceaf7af" exitCode=0 Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.253274 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" event={"ID":"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a","Type":"ContainerDied","Data":"3bbd8156c939cbba9d7ca6702789a67646ee0f06b1527be920083493bceaf7af"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.260419 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" event={"ID":"0d0a4cbb-8e82-42c7-8661-4c4f371699e0","Type":"ContainerStarted","Data":"511d4f679a2edfe6d6cff0f43c93fe7fff0a697439aa7d8c68c68a5db587aa47"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.260450 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" event={"ID":"0d0a4cbb-8e82-42c7-8661-4c4f371699e0","Type":"ContainerStarted","Data":"97ba0985b5a8f378bb0769bf3d0a675c93169b7f6c554efdcb47af41e256fd16"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.260462 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.263123 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" event={"ID":"e2a40a7d-4ba8-4fd0-863f-53d987b1383c","Type":"ContainerStarted","Data":"757ce9c93c1a39c5e84996f69ed77fd9ef7de494d284b1e1e3f02affc3aa1f12"} Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.264476 4908 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vzfwr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.264700 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" podUID="15ac1097-42d7-4ca8-b258-9418f0e0993e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.264798 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.264835 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 31 07:24:20 crc 
kubenswrapper[4908]: I0131 07:24:20.265375 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.265400 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.327608 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.327739 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" podStartSLOduration=131.32771985 podStartE2EDuration="2m11.32771985s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.312628604 +0000 UTC m=+166.928573258" watchObservedRunningTime="2026-01-31 07:24:20.32771985 +0000 UTC m=+166.943664494" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.327828 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-31 07:24:20.827814872 +0000 UTC m=+167.443759516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.356532 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" podStartSLOduration=131.356513897 podStartE2EDuration="2m11.356513897s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.356511557 +0000 UTC m=+166.972456211" watchObservedRunningTime="2026-01-31 07:24:20.356513897 +0000 UTC m=+166.972458551" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.361414 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.373522 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.873507742 +0000 UTC m=+167.489452396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.399281 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4l95" podStartSLOduration=131.399266621 podStartE2EDuration="2m11.399266621s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.397085905 +0000 UTC m=+167.013030559" watchObservedRunningTime="2026-01-31 07:24:20.399266621 +0000 UTC m=+167.015211275" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.419372 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5djg7" podStartSLOduration=8.419358875 podStartE2EDuration="8.419358875s" podCreationTimestamp="2026-01-31 07:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.41874349 +0000 UTC m=+167.034688134" watchObservedRunningTime="2026-01-31 07:24:20.419358875 +0000 UTC m=+167.035303529" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.442041 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podStartSLOduration=131.442028906 podStartE2EDuration="2m11.442028906s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.440352093 +0000 UTC m=+167.056296747" watchObservedRunningTime="2026-01-31 07:24:20.442028906 +0000 UTC m=+167.057973560" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.461287 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" podStartSLOduration=131.461270038 podStartE2EDuration="2m11.461270038s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.460141249 +0000 UTC m=+167.076085923" watchObservedRunningTime="2026-01-31 07:24:20.461270038 +0000 UTC m=+167.077214692" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.462418 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.462695 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.962663914 +0000 UTC m=+167.578608568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.463348 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.465082 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:20.965068685 +0000 UTC m=+167.581013339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.493198 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hhsgr" podStartSLOduration=131.493178715 podStartE2EDuration="2m11.493178715s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.492038386 +0000 UTC m=+167.107983040" watchObservedRunningTime="2026-01-31 07:24:20.493178715 +0000 UTC m=+167.109123369" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.549190 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rlxht" podStartSLOduration=131.549171228 podStartE2EDuration="2m11.549171228s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.528315644 +0000 UTC m=+167.144260308" watchObservedRunningTime="2026-01-31 07:24:20.549171228 +0000 UTC m=+167.165115882" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.550943 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4jhvx" podStartSLOduration=131.550932303 podStartE2EDuration="2m11.550932303s" podCreationTimestamp="2026-01-31 07:22:09 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.54885965 +0000 UTC m=+167.164804304" watchObservedRunningTime="2026-01-31 07:24:20.550932303 +0000 UTC m=+167.166876957" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.565766 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.566241 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.066196373 +0000 UTC m=+167.682141027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.566741 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.567632 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.067594929 +0000 UTC m=+167.683539593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.578082 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rzszg" podStartSLOduration=131.578061187 podStartE2EDuration="2m11.578061187s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.577518793 +0000 UTC m=+167.193463447" watchObservedRunningTime="2026-01-31 07:24:20.578061187 +0000 UTC m=+167.194005841" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.597883 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lxjh4" podStartSLOduration=131.597859344 podStartE2EDuration="2m11.597859344s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.596697324 +0000 UTC m=+167.212641998" watchObservedRunningTime="2026-01-31 07:24:20.597859344 +0000 UTC m=+167.213804008" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.668724 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.668993 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.168965214 +0000 UTC m=+167.784909858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.670793 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5dh6j" podStartSLOduration=8.67077041 podStartE2EDuration="8.67077041s" podCreationTimestamp="2026-01-31 07:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.632369607 +0000 UTC m=+167.248314261" watchObservedRunningTime="2026-01-31 07:24:20.67077041 +0000 UTC m=+167.286715064" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.694720 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jt7r7" podStartSLOduration=131.694700632 podStartE2EDuration="2m11.694700632s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.668137602 +0000 
UTC m=+167.284082256" watchObservedRunningTime="2026-01-31 07:24:20.694700632 +0000 UTC m=+167.310645286" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.694890 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" podStartSLOduration=131.694883847 podStartE2EDuration="2m11.694883847s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.691709176 +0000 UTC m=+167.307653830" watchObservedRunningTime="2026-01-31 07:24:20.694883847 +0000 UTC m=+167.310828501" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.752952 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h" podStartSLOduration=131.752937073 podStartE2EDuration="2m11.752937073s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.751095435 +0000 UTC m=+167.367040089" watchObservedRunningTime="2026-01-31 07:24:20.752937073 +0000 UTC m=+167.368881727" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.770520 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.770851 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:21.270839541 +0000 UTC m=+167.886784195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.787619 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8rjct" podStartSLOduration=131.78760421 podStartE2EDuration="2m11.78760421s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.786654426 +0000 UTC m=+167.402599100" watchObservedRunningTime="2026-01-31 07:24:20.78760421 +0000 UTC m=+167.403548864" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.814884 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" podStartSLOduration=131.814869508 podStartE2EDuration="2m11.814869508s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.813905653 +0000 UTC m=+167.429850317" watchObservedRunningTime="2026-01-31 07:24:20.814869508 +0000 UTC m=+167.430814162" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.829110 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dbx7j" podStartSLOduration=130.829095312 
podStartE2EDuration="2m10.829095312s" podCreationTimestamp="2026-01-31 07:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:20.828216119 +0000 UTC m=+167.444160773" watchObservedRunningTime="2026-01-31 07:24:20.829095312 +0000 UTC m=+167.445039976" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.872259 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.872464 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.372434991 +0000 UTC m=+167.988379655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.872524 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.873167 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.373154129 +0000 UTC m=+167.989098783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.938027 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhffb"] Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.938956 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.942019 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.973489 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.973714 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.473684792 +0000 UTC m=+168.089629446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.973834 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:20 crc kubenswrapper[4908]: E0131 07:24:20.974170 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.474162704 +0000 UTC m=+168.090107358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.975344 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:20 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 07:24:20 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:20 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.975381 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:20 crc kubenswrapper[4908]: I0131 07:24:20.985886 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhffb"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.074537 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.074736 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.574712748 +0000 UTC m=+168.190657402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.074938 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.074987 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfczh\" (UniqueName: \"kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.075021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " 
pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.075043 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.075342 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.575335113 +0000 UTC m=+168.191279767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.140655 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ld4tn"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.141500 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.149242 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.174484 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ld4tn"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.177817 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.178112 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.678087303 +0000 UTC m=+168.294031957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.178473 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfczh\" (UniqueName: \"kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.178680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.178709 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.178731 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities\") pod \"certified-operators-qhffb\" (UID: 
\"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.179362 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.179779 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.179907 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.679895309 +0000 UTC m=+168.295839963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.246554 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfczh\" (UniqueName: \"kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh\") pod \"certified-operators-qhffb\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") " pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.252772 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.271238 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" event={"ID":"726a0082-0e03-4539-9f62-ee7776d0a7d8","Type":"ContainerStarted","Data":"43ca280288be586ec44bccad26e7ba7511dabac09e0cc37ef7230ee86c03b47d"} Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.275198 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" event={"ID":"d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a","Type":"ContainerStarted","Data":"158722dd1fbf80ec8555326c2bae628b91646d170989b7147e11ffa6a74da465"} Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.275857 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.275906 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.276353 4908 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ltcmm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.276387 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.276390 4908 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cskt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.276431 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: 
connection refused" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.277017 4908 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vzfwr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.277038 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr" podUID="15ac1097-42d7-4ca8-b258-9418f0e0993e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.279245 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.279638 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.779614562 +0000 UTC m=+168.395559206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.279794 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.279887 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.279925 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjrz\" (UniqueName: \"kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.279968 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.280431 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.780421392 +0000 UTC m=+168.396366036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.299836 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" podStartSLOduration=132.299822199 podStartE2EDuration="2m12.299822199s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:21.299339786 +0000 UTC m=+167.915284440" watchObservedRunningTime="2026-01-31 07:24:21.299822199 +0000 UTC m=+167.915766853" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.343221 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.344708 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.358435 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.358563 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.369314 4908 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-m8jlm container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.369576 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" podUID="d3edea2b-13a4-4b3c-8464-d6bcc1f47b0a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.372768 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.383035 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.383430 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.383553 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggjrz\" (UniqueName: \"kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.383647 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.384585 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.884569938 +0000 UTC m=+168.500514592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.389524 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.395034 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.441627 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggjrz\" (UniqueName: \"kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz\") pod \"community-operators-ld4tn\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") " pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.459244 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.484961 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgbh\" (UniqueName: \"kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.485056 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.485076 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.485100 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.485352 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:21.985341357 +0000 UTC m=+168.601286011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.542325 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.543593 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.585752 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.585924 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.08589659 +0000 UTC m=+168.701841244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.586774 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.586856 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.587002 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.587142 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgbh\" (UniqueName: \"kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh\") pod \"certified-operators-c8k74\" (UID: 
\"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.588230 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.588692 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.088681401 +0000 UTC m=+168.704626055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.592124 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.601730 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.618326 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zzgbh\" (UniqueName: \"kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh\") pod \"certified-operators-c8k74\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.666370 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.690165 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.690337 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.690364 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbrx\" (UniqueName: \"kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.690453 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities\") pod \"community-operators-hfhwx\" 
(UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.690572 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.190553239 +0000 UTC m=+168.806497893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.768188 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhffb"] Jan 31 07:24:21 crc kubenswrapper[4908]: W0131 07:24:21.784364 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc5d84aa_bc03_4089_be41_0f32bd1ceff4.slice/crio-78b04d97699af8a0461fa1adef1512ef98870ac4eaf5995b7e28c1daa67ad1bc WatchSource:0}: Error finding container 78b04d97699af8a0461fa1adef1512ef98870ac4eaf5995b7e28c1daa67ad1bc: Status 404 returned error can't find the container with id 78b04d97699af8a0461fa1adef1512ef98870ac4eaf5995b7e28c1daa67ad1bc Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.793902 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.793995 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.794023 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.794042 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbrx\" (UniqueName: \"kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.794607 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.794866 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 07:24:22.294852638 +0000 UTC m=+168.910797292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.796026 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.849730 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbrx\" (UniqueName: \"kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx\") pod \"community-operators-hfhwx\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.889262 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.897392 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:21 crc kubenswrapper[4908]: E0131 07:24:21.897785 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.397770412 +0000 UTC m=+169.013715066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.981275 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:21 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 07:24:21 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:21 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:21 crc kubenswrapper[4908]: I0131 07:24:21.981319 4908 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.001067 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ld4tn"] Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.002104 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.009387 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.509367958 +0000 UTC m=+169.125312612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: W0131 07:24:22.029869 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c0fecdd_45be_4880_9629_53c2efef8340.slice/crio-d41c75f0a90b95d5b5376214e975283f5c765977c07413210dc0a762ecd7e334 WatchSource:0}: Error finding container d41c75f0a90b95d5b5376214e975283f5c765977c07413210dc0a762ecd7e334: Status 404 returned error can't find the container with id d41c75f0a90b95d5b5376214e975283f5c765977c07413210dc0a762ecd7e334 Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.110519 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.110783 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.610767532 +0000 UTC m=+169.226712186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.213564 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.214311 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.714295902 +0000 UTC m=+169.330240556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.310706 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerStarted","Data":"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"} Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.310753 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerStarted","Data":"78b04d97699af8a0461fa1adef1512ef98870ac4eaf5995b7e28c1daa67ad1bc"} Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.314956 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.319351 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.81932736 +0000 UTC m=+169.435272014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.319482 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.319880 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.819872494 +0000 UTC m=+169.435817148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.320114 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerStarted","Data":"d41c75f0a90b95d5b5376214e975283f5c765977c07413210dc0a762ecd7e334"} Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.321257 4908 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cskt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.321300 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.355162 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.424253 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.425705 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:22.925690042 +0000 UTC m=+169.541634686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.532137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.532505 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.032489745 +0000 UTC m=+169.648434399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.634411 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.634686 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.13467098 +0000 UTC m=+169.750615624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.735832 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.736480 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.236468805 +0000 UTC m=+169.852413459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.783911 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:24:22 crc kubenswrapper[4908]: W0131 07:24:22.794767 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod100bafc6_355c_4131_9907_45004788f44c.slice/crio-5335251c873b37870693eaf7f4ba281f9a14a83bd7bc3ccb10d36c46aedd1bcf WatchSource:0}: Error finding container 5335251c873b37870693eaf7f4ba281f9a14a83bd7bc3ccb10d36c46aedd1bcf: Status 404 returned error can't find the container with id 5335251c873b37870693eaf7f4ba281f9a14a83bd7bc3ccb10d36c46aedd1bcf Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.836826 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.837179 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.337162462 +0000 UTC m=+169.953107116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.938832 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:22 crc kubenswrapper[4908]: E0131 07:24:22.939273 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.439262185 +0000 UTC m=+170.055206839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.950197 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"] Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.951418 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.954347 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.970174 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"] Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.983566 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:22 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 07:24:22 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:22 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:22 crc kubenswrapper[4908]: I0131 07:24:22.983640 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.003136 4908 csr.go:261] certificate signing request csr-p4n66 is approved, waiting to be issued Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.025624 4908 csr.go:257] certificate signing request csr-p4n66 is issued Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.040275 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.040521 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.540489216 +0000 UTC m=+170.156433870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.040615 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.040659 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.040693 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.040754 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwjd\" (UniqueName: 
\"kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.040967 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.540955938 +0000 UTC m=+170.156900592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.142267 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.142485 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.642454506 +0000 UTC m=+170.258399170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.142821 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhwjd\" (UniqueName: \"kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.142887 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.142911 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.142937 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities\") pod \"redhat-marketplace-hqxmf\" (UID: 
\"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.143208 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.643197825 +0000 UTC m=+170.259142479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.143393 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.143512 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content\") pod \"redhat-marketplace-hqxmf\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.169370 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhwjd\" (UniqueName: \"kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd\") pod \"redhat-marketplace-hqxmf\" (UID: 
\"b2db9dfc-20ec-446a-878c-db0e800be1a0\") " pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.243758 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.243910 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.743890262 +0000 UTC m=+170.359834926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.243943 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.244232 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.74422252 +0000 UTC m=+170.360167174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.270457 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqxmf" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.345597 4908 generic.go:334] "Generic (PLEG): container finished" podID="100bafc6-355c-4131-9907-45004788f44c" containerID="bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba" exitCode=0 Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.345689 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerDied","Data":"bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.345716 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerStarted","Data":"5335251c873b37870693eaf7f4ba281f9a14a83bd7bc3ccb10d36c46aedd1bcf"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.347356 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.347683 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.847666858 +0000 UTC m=+170.463611512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.350466 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.355334 4908 generic.go:334] "Generic (PLEG): container finished" podID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerID="41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7" exitCode=0 Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.355396 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerDied","Data":"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.365612 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 
07:24:23.367041 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.374595 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c0fecdd-45be-4880-9629-53c2efef8340" containerID="6042dde4253d792b82f569620786b94646ce5f06f4f6ef578a2af6288f82b628" exitCode=0 Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.374822 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerDied","Data":"6042dde4253d792b82f569620786b94646ce5f06f4f6ef578a2af6288f82b628"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.407504 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerDied","Data":"59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.407455 4908 generic.go:334] "Generic (PLEG): container finished" podID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerID="59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5" exitCode=0 Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.408016 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerStarted","Data":"a1da9aa2345801d9df2aa7c782295856a95eda5415a0dafe5d80879717327b8b"} Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.435924 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.452524 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.452614 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.452736 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.452792 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4nx\" (UniqueName: \"kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.453153 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:23.953140267 +0000 UTC m=+170.569084931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.562094 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.562342 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.562388 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4nx\" (UniqueName: \"kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.562439 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " 
pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.562896 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.563145 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.563194 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.063176853 +0000 UTC m=+170.679121507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.621828 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4nx\" (UniqueName: \"kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx\") pod \"redhat-marketplace-n9szx\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.664657 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.665033 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.165017959 +0000 UTC m=+170.780962613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.698565 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.768543 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.768726 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.268697373 +0000 UTC m=+170.884642027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.768820 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.769150 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.269143554 +0000 UTC m=+170.885088208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.869788 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.870121 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.370090528 +0000 UTC m=+170.986035202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.870300 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.870676 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.370657942 +0000 UTC m=+170.986602606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.972717 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.973130 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.473088924 +0000 UTC m=+171.089033578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.973370 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:23 crc kubenswrapper[4908]: E0131 07:24:23.973729 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.47371376 +0000 UTC m=+171.089658414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.976592 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:23 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 07:24:23 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:23 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.976663 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:23 crc kubenswrapper[4908]: I0131 07:24:23.980798 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"] Jan 31 07:24:24 crc kubenswrapper[4908]: W0131 07:24:24.002960 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2db9dfc_20ec_446a_878c_db0e800be1a0.slice/crio-4ad118a6cdfdb81da7850f7a9e64cd0f3c6f489997da2fa4b8f83e62981fd88c WatchSource:0}: Error finding container 4ad118a6cdfdb81da7850f7a9e64cd0f3c6f489997da2fa4b8f83e62981fd88c: Status 404 returned error can't find the container with id 4ad118a6cdfdb81da7850f7a9e64cd0f3c6f489997da2fa4b8f83e62981fd88c 
Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.026524 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 07:19:23 +0000 UTC, rotation deadline is 2026-12-25 12:47:27.639230026 +0000 UTC Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.026562 4908 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7877h23m3.612670704s for next certificate rotation Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.074702 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.075106 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.575089314 +0000 UTC m=+171.191033968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.149079 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.186953 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.187627 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.687612264 +0000 UTC m=+171.303556918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.267423 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.269751 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.278706 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.278925 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.282588 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.288809 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.289283 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.789257475 +0000 UTC m=+171.405202139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.353335 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.354511 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.361340 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.388838 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.393924 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.393972 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.394023 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.394072 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56tl\" (UniqueName: \"kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl\") 
pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.394100 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.394125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.394467 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.894452588 +0000 UTC m=+171.510397242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.439346 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerStarted","Data":"4ad118a6cdfdb81da7850f7a9e64cd0f3c6f489997da2fa4b8f83e62981fd88c"} Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.451527 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" event={"ID":"726a0082-0e03-4539-9f62-ee7776d0a7d8","Type":"ContainerStarted","Data":"c655f1ab6b6aadf5791046e03eff6c58ae92cf5db40845f7e4e60d9e466003b3"} Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.465004 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerStarted","Data":"ccd1121ca22ed2accf8fc4361b87e0d7857ba6b2605f066b56d28b60cc53c98d"} Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.468709 4908 generic.go:334] "Generic (PLEG): container finished" podID="fe51cf59-5f34-4f01-8404-2f95b7ca742b" containerID="c1e155866104b6dc0710b8f68dc7c3386a414264ab5b5cce8f826379dfa04c58" exitCode=0 Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.468754 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" 
event={"ID":"fe51cf59-5f34-4f01-8404-2f95b7ca742b","Type":"ContainerDied","Data":"c1e155866104b6dc0710b8f68dc7c3386a414264ab5b5cce8f826379dfa04c58"} Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.481789 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jm4lf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.496303 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.496667 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.496718 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.496752 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56tl\" (UniqueName: \"kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 
07:24:24.496772 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.496787 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.497066 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.497214 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.497357 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:24.997332431 +0000 UTC m=+171.613277145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.497787 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.531924 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.554401 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56tl\" (UniqueName: \"kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl\") pod \"redhat-operators-wg9bf\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.597887 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: 
\"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.598699 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.098683194 +0000 UTC m=+171.714627848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.611998 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.622660 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.622771 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.646137 4908 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vln8d container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]log ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]etcd ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/max-in-flight-filter ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 07:24:24 crc kubenswrapper[4908]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 07:24:24 crc kubenswrapper[4908]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/openshift.io-startinformers ok Jan 31 07:24:24 crc kubenswrapper[4908]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 07:24:24 crc 
kubenswrapper[4908]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 07:24:24 crc kubenswrapper[4908]: livez check failed Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.646202 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vln8d" podUID="787bae26-eaf0-4c74-84a1-4ada053cd05a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.699352 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.699531 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.199504715 +0000 UTC m=+171.815449369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.699706 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.700905 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.20088995 +0000 UTC m=+171.816834604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.707259 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.760489 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.766683 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.770155 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.787523 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nb82q" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.818720 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.818959 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.819021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " 
pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.819091 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qczk\" (UniqueName: \"kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.819222 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.319206958 +0000 UTC m=+171.935151612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.921401 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.921450 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.921495 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.921579 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qczk\" (UniqueName: \"kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:24 crc kubenswrapper[4908]: E0131 07:24:24.923110 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.423094797 +0000 UTC m=+172.039039451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.984940 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:24 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 07:24:24 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:24 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.985011 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:24 crc kubenswrapper[4908]: I0131 07:24:24.998919 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4vxx6" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.022486 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:25 crc kubenswrapper[4908]: E0131 07:24:25.022688 4908 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.522663685 +0000 UTC m=+172.138608339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.022869 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:25 crc kubenswrapper[4908]: E0131 07:24:25.023243 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:24:25.5232362 +0000 UTC m=+172.139180854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-glh6f" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.024664 4908 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.033286 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.033421 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.037676 4908 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T07:24:25.024686587Z","Handler":null,"Name":""} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.043787 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qczk\" (UniqueName: 
\"kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk\") pod \"redhat-operators-wj798\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.057472 4908 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.057769 4908 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.077263 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l4rd4" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.085170 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.104528 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:24:25 crc kubenswrapper[4908]: W0131 07:24:25.104714 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbcf0ad03_3820_4978_930d_0991084d4eb0.slice/crio-24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9 WatchSource:0}: Error finding container 24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9: Status 404 returned error can't find the container with id 24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.125530 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.142623 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.168886 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"] Jan 31 07:24:25 crc kubenswrapper[4908]: W0131 07:24:25.197080 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a62ca4_8320_4c0e_a5e0_7e3aaa8b837f.slice/crio-ba9f4b714e0de9f3c637ef3484d179dc4542fb72645a47e1964b820601cee349 WatchSource:0}: Error finding container ba9f4b714e0de9f3c637ef3484d179dc4542fb72645a47e1964b820601cee349: Status 404 returned error can't find the container with id ba9f4b714e0de9f3c637ef3484d179dc4542fb72645a47e1964b820601cee349 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.226768 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.231412 4908 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.231610 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.495262 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-glh6f\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.530566 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.536046 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.542433 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.543406 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.549415 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.602281 4908 generic.go:334] "Generic (PLEG): container finished" podID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerID="89e3b579cecb6f161ecdf1c9b57c063b22660b51b27da058b9e6039632d6775d" exitCode=0 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.602356 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerDied","Data":"89e3b579cecb6f161ecdf1c9b57c063b22660b51b27da058b9e6039632d6775d"} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.606182 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bcf0ad03-3820-4978-930d-0991084d4eb0","Type":"ContainerStarted","Data":"24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9"} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.645897 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.646334 
4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.668695 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" event={"ID":"726a0082-0e03-4539-9f62-ee7776d0a7d8","Type":"ContainerStarted","Data":"e742b8df7d226ba2b3d4a320ae0691287aed5c93dc583af2470ef1b0531a7c00"} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.686259 4908 generic.go:334] "Generic (PLEG): container finished" podID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerID="b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f" exitCode=0 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.686327 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerDied","Data":"b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f"} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.694769 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerStarted","Data":"ba9f4b714e0de9f3c637ef3484d179dc4542fb72645a47e1964b820601cee349"} Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.747406 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 
07:24:25.747494 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.748118 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.752602 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.806198 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.825566 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:24:25 crc kubenswrapper[4908]: W0131 07:24:25.870022 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce51ffb3_c332_4bb8_b574_44911178c9a1.slice/crio-de3fcbcf87aae94fae8d852867e7c387d43705f1be79c433d49652c49f46ce71 WatchSource:0}: Error finding container de3fcbcf87aae94fae8d852867e7c387d43705f1be79c433d49652c49f46ce71: Status 404 returned error can't find the container with id 
de3fcbcf87aae94fae8d852867e7c387d43705f1be79c433d49652c49f46ce71 Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.902293 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.902651 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.950192 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume\") pod \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.950273 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume\") pod \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.950308 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg65b\" (UniqueName: \"kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b\") pod \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\" (UID: \"fe51cf59-5f34-4f01-8404-2f95b7ca742b\") " Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.952002 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe51cf59-5f34-4f01-8404-2f95b7ca742b" (UID: "fe51cf59-5f34-4f01-8404-2f95b7ca742b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.955485 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b" (OuterVolumeSpecName: "kube-api-access-mg65b") pod "fe51cf59-5f34-4f01-8404-2f95b7ca742b" (UID: "fe51cf59-5f34-4f01-8404-2f95b7ca742b"). InnerVolumeSpecName "kube-api-access-mg65b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.960038 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe51cf59-5f34-4f01-8404-2f95b7ca742b" (UID: "fe51cf59-5f34-4f01-8404-2f95b7ca742b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.965393 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.966389 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.972214 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lnblx" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.977943 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:24:25 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld Jan 31 
07:24:25 crc kubenswrapper[4908]: [+]process-running ok Jan 31 07:24:25 crc kubenswrapper[4908]: healthz check failed Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.977999 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:24:25 crc kubenswrapper[4908]: I0131 07:24:25.986146 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.052392 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe51cf59-5f34-4f01-8404-2f95b7ca742b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.052467 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe51cf59-5f34-4f01-8404-2f95b7ca742b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.052486 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg65b\" (UniqueName: \"kubernetes.io/projected/fe51cf59-5f34-4f01-8404-2f95b7ca742b-kube-api-access-mg65b\") on node \"crc\" DevicePath \"\"" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.075623 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"] Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.085022 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8rjct" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.085948 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.086022 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.086207 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.086225 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.088328 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.088395 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 
07:24:26.103173 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.103204 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.104734 4908 patch_prober.go:28] interesting pod/console-f9d7485db-fjlrr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.104769 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fjlrr" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.219907 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.324569 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.381011 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.382320 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.388380 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m8jlm" Jan 31 07:24:26 crc 
kubenswrapper[4908]: I0131 07:24:26.388684 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vzfwr"
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.719335 4908 generic.go:334] "Generic (PLEG): container finished" podID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerID="fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98" exitCode=0
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.720733 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerDied","Data":"fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.728357 4908 generic.go:334] "Generic (PLEG): container finished" podID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerID="83a8352a694ce2d73f6f211580a05550c0e675cb58219a424c23e3c7dfcc7446" exitCode=0
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.728408 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerDied","Data":"83a8352a694ce2d73f6f211580a05550c0e675cb58219a424c23e3c7dfcc7446"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.728433 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerStarted","Data":"de3fcbcf87aae94fae8d852867e7c387d43705f1be79c433d49652c49f46ce71"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.731152 4908 generic.go:334] "Generic (PLEG): container finished" podID="bcf0ad03-3820-4978-930d-0991084d4eb0" containerID="ffa48f6fe6d6b2653c5f4b1bd87b2b65d26fb46c5af715dc6bdc38b74b0a7095" exitCode=0
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.731274 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bcf0ad03-3820-4978-930d-0991084d4eb0","Type":"ContainerDied","Data":"ffa48f6fe6d6b2653c5f4b1bd87b2b65d26fb46c5af715dc6bdc38b74b0a7095"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.735608 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" event={"ID":"726a0082-0e03-4539-9f62-ee7776d0a7d8","Type":"ContainerStarted","Data":"61ecf6e634cd1107318daedbb2f0dea3e098cc80fff2953c6f9fe25483cb40ee"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.750083 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" event={"ID":"0d369cc1-14e7-49ff-b253-bc196840a444","Type":"ContainerStarted","Data":"851691cf9aae707910865e676c602ccbaaea0d6f70debfb98de1c59487996c0e"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.750123 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" event={"ID":"0d369cc1-14e7-49ff-b253-bc196840a444","Type":"ContainerStarted","Data":"170ade0cdeb13d8c098004c415e273987010d0d5be7f2fd8031a845b1f97d25d"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.753560 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c4feb749-2256-4d6b-870f-d0512aa20a6b","Type":"ContainerStarted","Data":"099150e043ee7fce74f5e08fddd55e918a2bd09220b1b0a458849105cca92ca6"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.770741 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2rbj4" podStartSLOduration=14.770725022 podStartE2EDuration="14.770725022s" podCreationTimestamp="2026-01-31 07:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:26.77024408 +0000 UTC m=+173.386188744" watchObservedRunningTime="2026-01-31 07:24:26.770725022 +0000 UTC m=+173.386669676"
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.775511 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.775880 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t" event={"ID":"fe51cf59-5f34-4f01-8404-2f95b7ca742b","Type":"ContainerDied","Data":"4531e8029653d51967138d9ac156596ca220042bdcd056c660b3c1be1da507d8"}
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.775903 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4531e8029653d51967138d9ac156596ca220042bdcd056c660b3c1be1da507d8"
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.976763 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:26 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:26 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:26 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:26 crc kubenswrapper[4908]: I0131 07:24:26.976839 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:27 crc kubenswrapper[4908]: I0131 07:24:27.814734 4908 generic.go:334] "Generic (PLEG): container finished" podID="c4feb749-2256-4d6b-870f-d0512aa20a6b" containerID="4f9bf381726b85ca0e24be83e4058abfd13e8c8ab8958c7bfd3ee3921c41e9b3" exitCode=0
Jan 31 07:24:27 crc kubenswrapper[4908]: I0131 07:24:27.814942 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c4feb749-2256-4d6b-870f-d0512aa20a6b","Type":"ContainerDied","Data":"4f9bf381726b85ca0e24be83e4058abfd13e8c8ab8958c7bfd3ee3921c41e9b3"}
Jan 31 07:24:27 crc kubenswrapper[4908]: I0131 07:24:27.873667 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" podStartSLOduration=138.873646499 podStartE2EDuration="2m18.873646499s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:24:27.870927379 +0000 UTC m=+174.486872033" watchObservedRunningTime="2026-01-31 07:24:27.873646499 +0000 UTC m=+174.489591153"
Jan 31 07:24:27 crc kubenswrapper[4908]: I0131 07:24:27.974109 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:27 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:27 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:27 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:27 crc kubenswrapper[4908]: I0131 07:24:27.974151 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.100760 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.208455 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir\") pod \"bcf0ad03-3820-4978-930d-0991084d4eb0\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") "
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.208596 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access\") pod \"bcf0ad03-3820-4978-930d-0991084d4eb0\" (UID: \"bcf0ad03-3820-4978-930d-0991084d4eb0\") "
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.208591 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcf0ad03-3820-4978-930d-0991084d4eb0" (UID: "bcf0ad03-3820-4978-930d-0991084d4eb0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.209218 4908 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf0ad03-3820-4978-930d-0991084d4eb0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.222748 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcf0ad03-3820-4978-930d-0991084d4eb0" (UID: "bcf0ad03-3820-4978-930d-0991084d4eb0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.310973 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf0ad03-3820-4978-930d-0991084d4eb0-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.825881 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bcf0ad03-3820-4978-930d-0991084d4eb0","Type":"ContainerDied","Data":"24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9"}
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.826255 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fd1da745c193678d8bbce4946bf885b3bc945ab656ceb52fd0b42a075b11b9"
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.825956 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.974431 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:28 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:28 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:28 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:28 crc kubenswrapper[4908]: I0131 07:24:28.974501 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.131635 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.226065 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access\") pod \"c4feb749-2256-4d6b-870f-d0512aa20a6b\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") "
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.226299 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir\") pod \"c4feb749-2256-4d6b-870f-d0512aa20a6b\" (UID: \"c4feb749-2256-4d6b-870f-d0512aa20a6b\") "
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.226404 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c4feb749-2256-4d6b-870f-d0512aa20a6b" (UID: "c4feb749-2256-4d6b-870f-d0512aa20a6b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.226702 4908 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4feb749-2256-4d6b-870f-d0512aa20a6b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.231513 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c4feb749-2256-4d6b-870f-d0512aa20a6b" (UID: "c4feb749-2256-4d6b-870f-d0512aa20a6b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.328527 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4feb749-2256-4d6b-870f-d0512aa20a6b-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.629602 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.634444 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vln8d"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.847194 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.847275 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c4feb749-2256-4d6b-870f-d0512aa20a6b","Type":"ContainerDied","Data":"099150e043ee7fce74f5e08fddd55e918a2bd09220b1b0a458849105cca92ca6"}
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.847311 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099150e043ee7fce74f5e08fddd55e918a2bd09220b1b0a458849105cca92ca6"
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.976108 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:29 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:29 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:29 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:29 crc kubenswrapper[4908]: I0131 07:24:29.976175 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:30 crc kubenswrapper[4908]: I0131 07:24:30.978097 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:30 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:30 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:30 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:30 crc kubenswrapper[4908]: I0131 07:24:30.978430 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.407287 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5djg7"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.885244 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-h7tss_ccb2de53-ecca-4439-94c0-2b65e5b21789/cluster-samples-operator/0.log"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.885314 4908 generic.go:334] "Generic (PLEG): container finished" podID="ccb2de53-ecca-4439-94c0-2b65e5b21789" containerID="502a0b08a8dac0a69fe71f8598a99c2b1636bd21216fec7b3183e39a3feddbcd" exitCode=2
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.885346 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" event={"ID":"ccb2de53-ecca-4439-94c0-2b65e5b21789","Type":"ContainerDied","Data":"502a0b08a8dac0a69fe71f8598a99c2b1636bd21216fec7b3183e39a3feddbcd"}
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.886021 4908 scope.go:117] "RemoveContainer" containerID="502a0b08a8dac0a69fe71f8598a99c2b1636bd21216fec7b3183e39a3feddbcd"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.887319 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.914902 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1242d7b7-ba0b-4084-88f1-fedf57d84b11-metrics-certs\") pod \"network-metrics-daemon-2cg54\" (UID: \"1242d7b7-ba0b-4084-88f1-fedf57d84b11\") " pod="openshift-multus/network-metrics-daemon-2cg54"
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.993613 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:31 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:31 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:31 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:31 crc kubenswrapper[4908]: I0131 07:24:31.993942 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:32 crc kubenswrapper[4908]: I0131 07:24:32.163297 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2cg54"
Jan 31 07:24:32 crc kubenswrapper[4908]: I0131 07:24:32.974654 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:32 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:32 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:32 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:32 crc kubenswrapper[4908]: I0131 07:24:32.974780 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:33 crc kubenswrapper[4908]: I0131 07:24:33.974019 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:33 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:33 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:33 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:33 crc kubenswrapper[4908]: I0131 07:24:33.974442 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:34 crc kubenswrapper[4908]: I0131 07:24:34.974490 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:34 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:34 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:34 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:34 crc kubenswrapper[4908]: I0131 07:24:34.974550 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:35 crc kubenswrapper[4908]: I0131 07:24:35.753790 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:35 crc kubenswrapper[4908]: I0131 07:24:35.973864 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:35 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:35 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:35 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:35 crc kubenswrapper[4908]: I0131 07:24:35.973922 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.082813 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.082843 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.082876 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.082901 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.104164 4908 patch_prober.go:28] interesting pod/console-f9d7485db-fjlrr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.104221 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fjlrr" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.974608 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:36 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:36 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:36 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:36 crc kubenswrapper[4908]: I0131 07:24:36.975136 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:37 crc kubenswrapper[4908]: I0131 07:24:37.975305 4908 patch_prober.go:28] interesting pod/router-default-5444994796-lnblx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 07:24:37 crc kubenswrapper[4908]: [-]has-synced failed: reason withheld
Jan 31 07:24:37 crc kubenswrapper[4908]: [+]process-running ok
Jan 31 07:24:37 crc kubenswrapper[4908]: healthz check failed
Jan 31 07:24:37 crc kubenswrapper[4908]: I0131 07:24:37.975408 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lnblx" podUID="900fb657-d80e-4887-8144-424a3cf39946" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 07:24:38 crc kubenswrapper[4908]: I0131 07:24:38.974853 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lnblx"
Jan 31 07:24:38 crc kubenswrapper[4908]: I0131 07:24:38.977432 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lnblx"
Jan 31 07:24:40 crc kubenswrapper[4908]: I0131 07:24:40.431141 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:24:40 crc kubenswrapper[4908]: I0131 07:24:40.431252 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:24:43 crc kubenswrapper[4908]: I0131 07:24:43.892124 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"]
Jan 31 07:24:43 crc kubenswrapper[4908]: I0131 07:24:43.892780 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" containerID="cri-o://95de7b969e700edee582912771674e68fd023ce575bf4ad16109c3372f64c164" gracePeriod=30
Jan 31 07:24:43 crc kubenswrapper[4908]: I0131 07:24:43.913033 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"]
Jan 31 07:24:43 crc kubenswrapper[4908]: I0131 07:24:43.913312 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" containerID="cri-o://045657d08ee33e47c4d0719a002f2f8851c792e0f12bf3a20c5b131a9b0d59a8" gracePeriod=30
Jan 31 07:24:44 crc kubenswrapper[4908]: I0131 07:24:44.961147 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" event={"ID":"61affb6e-e659-45c3-b1bb-f328e073304f","Type":"ContainerDied","Data":"95de7b969e700edee582912771674e68fd023ce575bf4ad16109c3372f64c164"}
Jan 31 07:24:44 crc kubenswrapper[4908]: I0131 07:24:44.961097 4908 generic.go:334] "Generic (PLEG): container finished" podID="61affb6e-e659-45c3-b1bb-f328e073304f" containerID="95de7b969e700edee582912771674e68fd023ce575bf4ad16109c3372f64c164" exitCode=0
Jan 31 07:24:44 crc kubenswrapper[4908]: I0131 07:24:44.963374 4908 generic.go:334] "Generic (PLEG): container finished" podID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerID="045657d08ee33e47c4d0719a002f2f8851c792e0f12bf3a20c5b131a9b0d59a8" exitCode=0
Jan 31 07:24:44 crc kubenswrapper[4908]: I0131 07:24:44.963407 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" event={"ID":"1b18d2de-0b08-43c9-bcbf-1ced621bac08","Type":"ContainerDied","Data":"045657d08ee33e47c4d0719a002f2f8851c792e0f12bf3a20c5b131a9b0d59a8"}
Jan 31 07:24:45 crc kubenswrapper[4908]: I0131 07:24:45.759278 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f"
Jan 31 07:24:45 crc kubenswrapper[4908]: I0131 07:24:45.923216 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 31 07:24:45 crc kubenswrapper[4908]: I0131 07:24:45.923311 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 31 07:24:45 crc kubenswrapper[4908]: I0131 07:24:45.976387 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 31 07:24:45 crc kubenswrapper[4908]: I0131 07:24:45.976437 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.083608 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.083662 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.083609 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.083755 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.083795 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-8rjct"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.084287 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"62e94a243fb724f5e7a2f82c5dee7fd26fbec5b3f998e78366eb7d29d35f8224"} pod="openshift-console/downloads-7954f5f757-8rjct" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.084356 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.084366 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" containerID="cri-o://62e94a243fb724f5e7a2f82c5dee7fd26fbec5b3f998e78366eb7d29d35f8224" gracePeriod=2
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.084377 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.118316 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fjlrr"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.121602 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fjlrr"
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.976753 4908 generic.go:334] "Generic (PLEG): container finished" podID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerID="62e94a243fb724f5e7a2f82c5dee7fd26fbec5b3f998e78366eb7d29d35f8224" exitCode=0
Jan 31 07:24:46 crc kubenswrapper[4908]: I0131 07:24:46.976842 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rjct" event={"ID":"345e9f59-f1cc-40f5-97ea-42940f12805c","Type":"ContainerDied","Data":"62e94a243fb724f5e7a2f82c5dee7fd26fbec5b3f998e78366eb7d29d35f8224"}
Jan 31 07:24:50 crc kubenswrapper[4908]: I0131 07:24:50.648964 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.082650 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.083039 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.289705 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gmh7h"
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.922951 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.923044 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.976507 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 07:24:56 crc kubenswrapper[4908]: I0131 07:24:56.976570 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:25:00 crc kubenswrapper[4908]: E0131 07:25:00.028921 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled"
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 07:25:00 crc kubenswrapper[4908]: E0131 07:25:00.029144 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wbrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hfhwx_openshift-marketplace(100bafc6-355c-4131-9907-45004788f44c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 
07:25:00 crc kubenswrapper[4908]: E0131 07:25:00.031265 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hfhwx" podUID="100bafc6-355c-4131-9907-45004788f44c" Jan 31 07:25:04 crc kubenswrapper[4908]: E0131 07:25:04.368645 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hfhwx" podUID="100bafc6-355c-4131-9907-45004788f44c" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327070 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 07:25:05 crc kubenswrapper[4908]: E0131 07:25:05.327498 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf0ad03-3820-4978-930d-0991084d4eb0" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327534 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf0ad03-3820-4978-930d-0991084d4eb0" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: E0131 07:25:05.327571 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe51cf59-5f34-4f01-8404-2f95b7ca742b" containerName="collect-profiles" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327589 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe51cf59-5f34-4f01-8404-2f95b7ca742b" containerName="collect-profiles" Jan 31 07:25:05 crc kubenswrapper[4908]: E0131 07:25:05.327610 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4feb749-2256-4d6b-870f-d0512aa20a6b" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 
07:25:05.327629 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4feb749-2256-4d6b-870f-d0512aa20a6b" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327874 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe51cf59-5f34-4f01-8404-2f95b7ca742b" containerName="collect-profiles" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327919 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4feb749-2256-4d6b-870f-d0512aa20a6b" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.327946 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf0ad03-3820-4978-930d-0991084d4eb0" containerName="pruner" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.328722 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.330649 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.331542 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.344547 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.412738 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.412802 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.514205 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.514602 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.514709 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.538460 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:05 crc kubenswrapper[4908]: I0131 07:25:05.668583 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.084916 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.085013 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.922348 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.922408 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.976738 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:25:06 crc kubenswrapper[4908]: I0131 07:25:06.976825 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:25:08 crc kubenswrapper[4908]: E0131 07:25:08.823511 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 07:25:08 crc kubenswrapper[4908]: E0131 07:25:08.823660 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzgbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-c8k74_openshift-marketplace(ba3d735e-ca4d-48b1-90c2-2edbcfa582ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:25:08 crc kubenswrapper[4908]: E0131 07:25:08.825041 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-c8k74" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" Jan 31 07:25:10 crc 
kubenswrapper[4908]: E0131 07:25:10.174565 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-c8k74" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.323481 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.324308 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.333924 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.381252 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.381323 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.381348 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.430611 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.430912 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.482379 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.482445 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.482478 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: 
I0131 07:25:10.482514 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.482601 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.498516 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:10 crc kubenswrapper[4908]: I0131 07:25:10.649107 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:25:11 crc kubenswrapper[4908]: E0131 07:25:11.126470 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 07:25:11 crc kubenswrapper[4908]: E0131 07:25:11.126652 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfczh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qhffb_openshift-marketplace(dc5d84aa-bc03-4089-be41-0f32bd1ceff4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:25:11 crc kubenswrapper[4908]: E0131 07:25:11.127858 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qhffb" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.083847 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.084364 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.921963 4908 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vgk68 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" start-of-body= Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.922363 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.977161 4908 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgj2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:25:16 crc kubenswrapper[4908]: I0131 07:25:16.977235 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:25:17 crc kubenswrapper[4908]: E0131 07:25:17.017053 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 07:25:17 crc kubenswrapper[4908]: E0131 07:25:17.017206 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhwjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hqxmf_openshift-marketplace(b2db9dfc-20ec-446a-878c-db0e800be1a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:25:17 crc kubenswrapper[4908]: E0131 07:25:17.018370 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hqxmf" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" Jan 31 07:25:20 crc 
kubenswrapper[4908]: E0131 07:25:20.045519 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hqxmf" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0"
Jan 31 07:25:20 crc kubenswrapper[4908]: E0131 07:25:20.045613 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qhffb" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.144772 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j" event={"ID":"1b18d2de-0b08-43c9-bcbf-1ced621bac08","Type":"ContainerDied","Data":"926831a840dba993a32f831aa8bc5c1a8be09efaaa48fd689f636d609db93ead"}
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.145240 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="926831a840dba993a32f831aa8bc5c1a8be09efaaa48fd689f636d609db93ead"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.152026 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68" event={"ID":"61affb6e-e659-45c3-b1bb-f328e073304f","Type":"ContainerDied","Data":"76e0ccc472d22acc3d30e6f6f6c87a2fc54325091ff5a04c02e868bbcbb11f70"}
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.152069 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e0ccc472d22acc3d30e6f6f6c87a2fc54325091ff5a04c02e868bbcbb11f70"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.156366 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.208997 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.224281 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"]
Jan 31 07:25:20 crc kubenswrapper[4908]: E0131 07:25:20.226938 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.226957 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: E0131 07:25:20.226999 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.227008 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.227403 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" containerName="route-controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.227427 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" containerName="controller-manager"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.228036 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.230306 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"]
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.324696 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert\") pod \"61affb6e-e659-45c3-b1bb-f328e073304f\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.324781 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config\") pod \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326078 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config" (OuterVolumeSpecName: "config") pod "1b18d2de-0b08-43c9-bcbf-1ced621bac08" (UID: "1b18d2de-0b08-43c9-bcbf-1ced621bac08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326160 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert\") pod \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326193 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca\") pod \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326533 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config\") pod \"61affb6e-e659-45c3-b1bb-f328e073304f\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326569 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbvv6\" (UniqueName: \"kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6\") pod \"61affb6e-e659-45c3-b1bb-f328e073304f\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326585 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca\") pod \"61affb6e-e659-45c3-b1bb-f328e073304f\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326925 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles\") pod \"61affb6e-e659-45c3-b1bb-f328e073304f\" (UID: \"61affb6e-e659-45c3-b1bb-f328e073304f\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327003 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hf7g\" (UniqueName: \"kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g\") pod \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\" (UID: \"1b18d2de-0b08-43c9-bcbf-1ced621bac08\") "
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.326883 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b18d2de-0b08-43c9-bcbf-1ced621bac08" (UID: "1b18d2de-0b08-43c9-bcbf-1ced621bac08"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327255 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdttj\" (UniqueName: \"kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327284 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327380 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327418 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327631 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327654 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b18d2de-0b08-43c9-bcbf-1ced621bac08-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327849 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca" (OuterVolumeSpecName: "client-ca") pod "61affb6e-e659-45c3-b1bb-f328e073304f" (UID: "61affb6e-e659-45c3-b1bb-f328e073304f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327867 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61affb6e-e659-45c3-b1bb-f328e073304f" (UID: "61affb6e-e659-45c3-b1bb-f328e073304f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.327971 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config" (OuterVolumeSpecName: "config") pod "61affb6e-e659-45c3-b1bb-f328e073304f" (UID: "61affb6e-e659-45c3-b1bb-f328e073304f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.344613 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61affb6e-e659-45c3-b1bb-f328e073304f" (UID: "61affb6e-e659-45c3-b1bb-f328e073304f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.347304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6" (OuterVolumeSpecName: "kube-api-access-wbvv6") pod "61affb6e-e659-45c3-b1bb-f328e073304f" (UID: "61affb6e-e659-45c3-b1bb-f328e073304f"). InnerVolumeSpecName "kube-api-access-wbvv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.348169 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g" (OuterVolumeSpecName: "kube-api-access-9hf7g") pod "1b18d2de-0b08-43c9-bcbf-1ced621bac08" (UID: "1b18d2de-0b08-43c9-bcbf-1ced621bac08"). InnerVolumeSpecName "kube-api-access-9hf7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.354193 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b18d2de-0b08-43c9-bcbf-1ced621bac08" (UID: "1b18d2de-0b08-43c9-bcbf-1ced621bac08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.400845 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429076 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdttj\" (UniqueName: \"kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429156 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429206 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429247 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429310 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61affb6e-e659-45c3-b1bb-f328e073304f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429321 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b18d2de-0b08-43c9-bcbf-1ced621bac08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429329 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429338 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbvv6\" (UniqueName: \"kubernetes.io/projected/61affb6e-e659-45c3-b1bb-f328e073304f-kube-api-access-wbvv6\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429346 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429354 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61affb6e-e659-45c3-b1bb-f328e073304f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.429362 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hf7g\" (UniqueName: \"kubernetes.io/projected/1b18d2de-0b08-43c9-bcbf-1ced621bac08-kube-api-access-9hf7g\") on node \"crc\" DevicePath \"\""
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.430460 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.430783 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.433660 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.445844 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdttj\" (UniqueName: \"kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj\") pod \"route-controller-manager-bd9cbfdf8-r6fnr\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.512662 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2cg54"]
Jan 31 07:25:20 crc kubenswrapper[4908]: W0131 07:25:20.549324 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1242d7b7_ba0b_4084_88f1_fedf57d84b11.slice/crio-f871cd5e7f9a47bfb58c4529b9f9c97a5bfad87ca95acf45d088b2e6f484ed88 WatchSource:0}: Error finding container f871cd5e7f9a47bfb58c4529b9f9c97a5bfad87ca95acf45d088b2e6f484ed88: Status 404 returned error can't find the container with id f871cd5e7f9a47bfb58c4529b9f9c97a5bfad87ca95acf45d088b2e6f484ed88
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.551319 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.568761 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 31 07:25:20 crc kubenswrapper[4908]: W0131 07:25:20.590656 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14dc50c5_4628_4ac4_a3e5_035acac8d1cc.slice/crio-8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413 WatchSource:0}: Error finding container 8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413: Status 404 returned error can't find the container with id 8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413
Jan 31 07:25:20 crc kubenswrapper[4908]: I0131 07:25:20.753761 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"]
Jan 31 07:25:20 crc kubenswrapper[4908]: W0131 07:25:20.763386 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd0651f_bbf6_45bc_9897_dd6fb28b729d.slice/crio-7be32f708298db97bbadcdd79c0ad1548d59c872128b3d409ee8d36644b5eb96 WatchSource:0}: Error finding container 7be32f708298db97bbadcdd79c0ad1548d59c872128b3d409ee8d36644b5eb96: Status 404 returned error can't find the container with id 7be32f708298db97bbadcdd79c0ad1548d59c872128b3d409ee8d36644b5eb96
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.156969 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" event={"ID":"ffd0651f-bbf6-45bc-9897-dd6fb28b729d","Type":"ContainerStarted","Data":"7be32f708298db97bbadcdd79c0ad1548d59c872128b3d409ee8d36644b5eb96"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.157658 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2cg54" event={"ID":"1242d7b7-ba0b-4084-88f1-fedf57d84b11","Type":"ContainerStarted","Data":"f871cd5e7f9a47bfb58c4529b9f9c97a5bfad87ca95acf45d088b2e6f484ed88"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.158319 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"14dc50c5-4628-4ac4-a3e5-035acac8d1cc","Type":"ContainerStarted","Data":"8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.158915 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ccc69e3-f43e-40e0-8e47-da5491596d18","Type":"ContainerStarted","Data":"ab538d7b458e7d8a4260dde0c1692a7126308be2c835ef75c2cbac4a0cf914a8"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.160507 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rjct" event={"ID":"345e9f59-f1cc-40f5-97ea-42940f12805c","Type":"ContainerStarted","Data":"cbcbfc1eba32ba771c9646c04e1541ac73e4b4295453c1f0c6ea16be3670cf2c"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.160854 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8rjct"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.161175 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.161224 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.162871 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-h7tss_ccb2de53-ecca-4439-94c0-2b65e5b21789/cluster-samples-operator/0.log"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.162926 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7tss" event={"ID":"ccb2de53-ecca-4439-94c0-2b65e5b21789","Type":"ContainerStarted","Data":"d8e29279c328ea846245427eade01999ab4326bfce7f11754dd92fd899989d1d"}
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.162939 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vgk68"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.163082 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.211925 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"]
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.216227 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgj2j"]
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.219459 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"]
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.221874 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vgk68"]
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.497665 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.498174 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wj798_openshift-marketplace(ce51ffb3-c332-4bb8-b574-44911178c9a1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.499476 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wj798" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1"
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.745612 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.745958 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m4nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n9szx_openshift-marketplace(05f1e995-f324-4225-a4a8-d476a4da7ff4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 07:25:21 crc kubenswrapper[4908]: E0131 07:25:21.747419 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n9szx" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.955858 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b18d2de-0b08-43c9-bcbf-1ced621bac08" path="/var/lib/kubelet/pods/1b18d2de-0b08-43c9-bcbf-1ced621bac08/volumes"
Jan 31 07:25:21 crc kubenswrapper[4908]: I0131 07:25:21.957014 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61affb6e-e659-45c3-b1bb-f328e073304f" path="/var/lib/kubelet/pods/61affb6e-e659-45c3-b1bb-f328e073304f/volumes"
Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.058604 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.058739 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wg9bf_openshift-marketplace(18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.060428 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wg9bf" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.178339 4908 generic.go:334] "Generic (PLEG): container finished" podID="9ccc69e3-f43e-40e0-8e47-da5491596d18" containerID="849225dda00c1176ee1e67dc9e1e91d32b932128f2742e933ae55ac99ad33130" exitCode=0
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.178410 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ccc69e3-f43e-40e0-8e47-da5491596d18","Type":"ContainerDied","Data":"849225dda00c1176ee1e67dc9e1e91d32b932128f2742e933ae55ac99ad33130"}
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.179503 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" event={"ID":"ffd0651f-bbf6-45bc-9897-dd6fb28b729d","Type":"ContainerStarted","Data":"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d"}
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.179716 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.180902 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2cg54" event={"ID":"1242d7b7-ba0b-4084-88f1-fedf57d84b11","Type":"ContainerStarted","Data":"693518cc19dda7e0581a75c4eab431c93efdb1d46e42a12e8795771dc8942d33"}
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.180944 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2cg54" event={"ID":"1242d7b7-ba0b-4084-88f1-fedf57d84b11","Type":"ContainerStarted","Data":"90e2f2ef5b0f9757fe3854004e0b8f872b827d701e7cc42d0b9bd481660d640a"}
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.182098 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"14dc50c5-4628-4ac4-a3e5-035acac8d1cc","Type":"ContainerStarted","Data":"cea710f2e60c90d29c9f0109ab926542c7c0fdc82be81d9c6fe5c1545f33c3ed"}
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.182505 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.182635 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.183635 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wg9bf" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f"
Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.183687 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n9szx" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.186206 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.211425 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" podStartSLOduration=19.211408894 podStartE2EDuration="19.211408894s" podCreationTimestamp="2026-01-31 07:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:25:22.209408711 +0000 UTC m=+228.825353375" watchObservedRunningTime="2026-01-31 07:25:22.211408894 +0000 UTC m=+228.827353548"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.223138 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2cg54" podStartSLOduration=193.22311557 podStartE2EDuration="3m13.22311557s" podCreationTimestamp="2026-01-31 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:25:22.223038178 +0000 UTC m=+228.838982832" watchObservedRunningTime="2026-01-31 07:25:22.22311557 +0000 UTC m=+228.839060224"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.260519 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.260493699 podStartE2EDuration="12.260493699s" podCreationTimestamp="2026-01-31 07:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:25:22.253812804 +0000 UTC m=+228.869757458" watchObservedRunningTime="2026-01-31 07:25:22.260493699 +0000 UTC m=+228.876438353"
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.868760 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"]
Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.870479 4908 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.872627 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.873183 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.873334 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.873383 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.878170 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.885344 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.886661 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.887775 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"] Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.960876 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.961402 4908 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggjrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ld4tn_openshift-marketplace(0c0fecdd-45be-4880-9629-53c2efef8340): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:25:22 crc kubenswrapper[4908]: E0131 07:25:22.962830 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ld4tn" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.963414 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.963486 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.963521 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:22 crc kubenswrapper[4908]: I0131 07:25:22.963543 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:22 crc 
kubenswrapper[4908]: I0131 07:25:22.963596 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86fq\" (UniqueName: \"kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.064263 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.064341 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.064364 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.064380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca\") pod \"controller-manager-755766b95d-vhs26\" (UID: 
\"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.064438 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c86fq\" (UniqueName: \"kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.065662 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.072856 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.087969 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c86fq\" (UniqueName: \"kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.129971 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.130956 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config\") pod \"controller-manager-755766b95d-vhs26\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:23 crc kubenswrapper[4908]: I0131 07:25:23.191120 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.082868 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.082869 4908 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rjct container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.083365 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.083394 4908 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rjct" podUID="345e9f59-f1cc-40f5-97ea-42940f12805c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 31 07:25:26 crc kubenswrapper[4908]: E0131 07:25:26.777115 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ld4tn" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.832576 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.915695 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access\") pod \"9ccc69e3-f43e-40e0-8e47-da5491596d18\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.915761 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir\") pod \"9ccc69e3-f43e-40e0-8e47-da5491596d18\" (UID: \"9ccc69e3-f43e-40e0-8e47-da5491596d18\") " Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.916201 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ccc69e3-f43e-40e0-8e47-da5491596d18" (UID: "9ccc69e3-f43e-40e0-8e47-da5491596d18"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:25:26 crc kubenswrapper[4908]: I0131 07:25:26.922015 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ccc69e3-f43e-40e0-8e47-da5491596d18" (UID: "9ccc69e3-f43e-40e0-8e47-da5491596d18"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:25:27 crc kubenswrapper[4908]: I0131 07:25:27.017855 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccc69e3-f43e-40e0-8e47-da5491596d18-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:25:27 crc kubenswrapper[4908]: I0131 07:25:27.017884 4908 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ccc69e3-f43e-40e0-8e47-da5491596d18-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:25:27 crc kubenswrapper[4908]: I0131 07:25:27.217919 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ccc69e3-f43e-40e0-8e47-da5491596d18","Type":"ContainerDied","Data":"ab538d7b458e7d8a4260dde0c1692a7126308be2c835ef75c2cbac4a0cf914a8"} Jan 31 07:25:27 crc kubenswrapper[4908]: I0131 07:25:27.217991 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab538d7b458e7d8a4260dde0c1692a7126308be2c835ef75c2cbac4a0cf914a8" Jan 31 07:25:27 crc kubenswrapper[4908]: I0131 07:25:27.218043 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:25:36 crc kubenswrapper[4908]: I0131 07:25:36.090559 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8rjct" Jan 31 07:25:38 crc kubenswrapper[4908]: I0131 07:25:38.975259 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"] Jan 31 07:25:39 crc kubenswrapper[4908]: I0131 07:25:39.287415 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" event={"ID":"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0","Type":"ContainerStarted","Data":"ed4ef95aedf16d6d49bb6694f9a082e61dbf6bfb25d6df3d2590fc08d6bef020"} Jan 31 07:25:40 crc kubenswrapper[4908]: I0131 07:25:40.431545 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:25:40 crc kubenswrapper[4908]: I0131 07:25:40.431951 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:25:40 crc kubenswrapper[4908]: I0131 07:25:40.432068 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:25:40 crc kubenswrapper[4908]: I0131 07:25:40.433058 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:25:40 crc kubenswrapper[4908]: I0131 07:25:40.433170 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d" gracePeriod=600 Jan 31 07:25:41 crc kubenswrapper[4908]: I0131 07:25:41.303266 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerStarted","Data":"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d"} Jan 31 07:25:41 crc kubenswrapper[4908]: I0131 07:25:41.305061 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" event={"ID":"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0","Type":"ContainerStarted","Data":"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578"} Jan 31 07:25:41 crc kubenswrapper[4908]: I0131 07:25:41.307364 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerStarted","Data":"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4"} Jan 31 07:25:48 crc kubenswrapper[4908]: I0131 07:25:48.073720 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d" exitCode=0 Jan 31 07:25:48 crc kubenswrapper[4908]: I0131 07:25:48.073814 4908 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d"} Jan 31 07:25:48 crc kubenswrapper[4908]: I0131 07:25:48.075056 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:48 crc kubenswrapper[4908]: I0131 07:25:48.079831 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:25:48 crc kubenswrapper[4908]: I0131 07:25:48.097659 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" podStartSLOduration=45.097634786 podStartE2EDuration="45.097634786s" podCreationTimestamp="2026-01-31 07:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:25:48.093541729 +0000 UTC m=+254.709486383" watchObservedRunningTime="2026-01-31 07:25:48.097634786 +0000 UTC m=+254.713579480" Jan 31 07:25:49 crc kubenswrapper[4908]: I0131 07:25:49.082335 4908 generic.go:334] "Generic (PLEG): container finished" podID="100bafc6-355c-4131-9907-45004788f44c" containerID="76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4" exitCode=0 Jan 31 07:25:49 crc kubenswrapper[4908]: I0131 07:25:49.082414 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerDied","Data":"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4"} Jan 31 07:25:49 crc kubenswrapper[4908]: I0131 07:25:49.085961 4908 generic.go:334] "Generic (PLEG): container finished" podID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" 
containerID="a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d" exitCode=0 Jan 31 07:25:49 crc kubenswrapper[4908]: I0131 07:25:49.086017 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerDied","Data":"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d"} Jan 31 07:25:49 crc kubenswrapper[4908]: I0131 07:25:49.088649 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.120087 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerStarted","Data":"bd3ca5f6087e38373c1f4469fdc8e713473394765321de78e92823efd9839fd1"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.124822 4908 generic.go:334] "Generic (PLEG): container finished" podID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerID="8c9a14e976942b45358ee87f97f81fc9980c4eca8a7ac71d68d52b744e2ba0ed" exitCode=0 Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.124907 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerDied","Data":"8c9a14e976942b45358ee87f97f81fc9980c4eca8a7ac71d68d52b744e2ba0ed"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.132965 4908 generic.go:334] "Generic (PLEG): container finished" podID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerID="fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4" exitCode=0 Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.133056 4908 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerDied","Data":"fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.142298 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerStarted","Data":"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.147772 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerStarted","Data":"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.149571 4908 generic.go:334] "Generic (PLEG): container finished" podID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerID="37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e" exitCode=0 Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.149654 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerDied","Data":"37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.154005 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c0fecdd-45be-4880-9629-53c2efef8340" containerID="4d113c7a167c8ffe32889850e77ab9def80a86abafc132634c517e9aabc8fc92" exitCode=0 Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.154078 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" 
event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerDied","Data":"4d113c7a167c8ffe32889850e77ab9def80a86abafc132634c517e9aabc8fc92"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.160383 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerStarted","Data":"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf"} Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.217821 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfhwx" podStartSLOduration=2.720081152 podStartE2EDuration="1m31.217800839s" podCreationTimestamp="2026-01-31 07:24:21 +0000 UTC" firstStartedPulling="2026-01-31 07:24:23.35011219 +0000 UTC m=+169.966056844" lastFinishedPulling="2026-01-31 07:25:51.847831877 +0000 UTC m=+258.463776531" observedRunningTime="2026-01-31 07:25:52.215445125 +0000 UTC m=+258.831389779" watchObservedRunningTime="2026-01-31 07:25:52.217800839 +0000 UTC m=+258.833745493" Jan 31 07:25:52 crc kubenswrapper[4908]: I0131 07:25:52.252737 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c8k74" podStartSLOduration=3.139512705 podStartE2EDuration="1m31.252719829s" podCreationTimestamp="2026-01-31 07:24:21 +0000 UTC" firstStartedPulling="2026-01-31 07:24:23.450663763 +0000 UTC m=+170.066608417" lastFinishedPulling="2026-01-31 07:25:51.563870887 +0000 UTC m=+258.179815541" observedRunningTime="2026-01-31 07:25:52.251100415 +0000 UTC m=+258.867045069" watchObservedRunningTime="2026-01-31 07:25:52.252719829 +0000 UTC m=+258.868664483" Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.171584 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" 
event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerStarted","Data":"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.173889 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerStarted","Data":"7f5716a9b24a86a0131b3e9e651743e25eda4e8f36bf4c5b0dfbb43f41ce76b4"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.175667 4908 generic.go:334] "Generic (PLEG): container finished" podID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerID="bd3ca5f6087e38373c1f4469fdc8e713473394765321de78e92823efd9839fd1" exitCode=0
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.175726 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerDied","Data":"bd3ca5f6087e38373c1f4469fdc8e713473394765321de78e92823efd9839fd1"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.180066 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerStarted","Data":"5edf6a5207cf3f6fdc6b3f5d439402a4a5f78bdeeab854d7fad5dedb20567275"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.183877 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerStarted","Data":"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.186469 4908 generic.go:334] "Generic (PLEG): container finished" podID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerID="56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db" exitCode=0
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.186501 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerDied","Data":"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"}
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.195315 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhffb" podStartSLOduration=3.952866935 podStartE2EDuration="1m33.195297368s" podCreationTimestamp="2026-01-31 07:24:20 +0000 UTC" firstStartedPulling="2026-01-31 07:24:23.364062257 +0000 UTC m=+169.980006911" lastFinishedPulling="2026-01-31 07:25:52.60649269 +0000 UTC m=+259.222437344" observedRunningTime="2026-01-31 07:25:53.193115279 +0000 UTC m=+259.809059963" watchObservedRunningTime="2026-01-31 07:25:53.195297368 +0000 UTC m=+259.811242022"
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.222693 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ld4tn" podStartSLOduration=2.943610653 podStartE2EDuration="1m32.222674374s" podCreationTimestamp="2026-01-31 07:24:21 +0000 UTC" firstStartedPulling="2026-01-31 07:24:23.383427183 +0000 UTC m=+169.999371837" lastFinishedPulling="2026-01-31 07:25:52.662490904 +0000 UTC m=+259.278435558" observedRunningTime="2026-01-31 07:25:53.220776652 +0000 UTC m=+259.836721306" watchObservedRunningTime="2026-01-31 07:25:53.222674374 +0000 UTC m=+259.838619028"
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.262505 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqxmf" podStartSLOduration=4.356865286 podStartE2EDuration="1m31.262486177s" podCreationTimestamp="2026-01-31 07:24:22 +0000 UTC" firstStartedPulling="2026-01-31 07:24:25.604826294 +0000 UTC m=+172.220770948" lastFinishedPulling="2026-01-31 07:25:52.510447185 +0000 UTC m=+259.126391839" observedRunningTime="2026-01-31 07:25:53.261571072 +0000 UTC m=+259.877515736" watchObservedRunningTime="2026-01-31 07:25:53.262486177 +0000 UTC m=+259.878430831"
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.271042 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:25:53 crc kubenswrapper[4908]: I0131 07:25:53.271309 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.193640 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerStarted","Data":"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"}
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.244922 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9szx" podStartSLOduration=4.143723661 podStartE2EDuration="1m31.244897751s" podCreationTimestamp="2026-01-31 07:24:23 +0000 UTC" firstStartedPulling="2026-01-31 07:24:25.68752571 +0000 UTC m=+172.303470364" lastFinishedPulling="2026-01-31 07:25:52.78869979 +0000 UTC m=+259.404644454" observedRunningTime="2026-01-31 07:25:54.239771221 +0000 UTC m=+260.855715885" watchObservedRunningTime="2026-01-31 07:25:54.244897751 +0000 UTC m=+260.860842405"
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.286025 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wg9bf" podStartSLOduration=3.140141221 podStartE2EDuration="1m30.28600657s" podCreationTimestamp="2026-01-31 07:24:24 +0000 UTC" firstStartedPulling="2026-01-31 07:24:26.726025698 +0000 UTC m=+173.341970352" lastFinishedPulling="2026-01-31 07:25:53.871891047 +0000 UTC m=+260.487835701" observedRunningTime="2026-01-31 07:25:54.283516182 +0000 UTC m=+260.899460836" watchObservedRunningTime="2026-01-31 07:25:54.28600657 +0000 UTC m=+260.901951224"
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.624480 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hqxmf" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="registry-server" probeResult="failure" output=<
Jan 31 07:25:54 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s
Jan 31 07:25:54 crc kubenswrapper[4908]: >
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.708298 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wg9bf"
Jan 31 07:25:54 crc kubenswrapper[4908]: I0131 07:25:54.708353 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wg9bf"
Jan 31 07:25:55 crc kubenswrapper[4908]: I0131 07:25:55.200564 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerStarted","Data":"83409aa5478b260b9fc14577514e086186725234f8ade5477fced78e2eef583b"}
Jan 31 07:25:55 crc kubenswrapper[4908]: I0131 07:25:55.219300 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wj798" podStartSLOduration=5.024453853 podStartE2EDuration="1m31.219281805s" podCreationTimestamp="2026-01-31 07:24:24 +0000 UTC" firstStartedPulling="2026-01-31 07:24:27.819442221 +0000 UTC m=+174.435386875" lastFinishedPulling="2026-01-31 07:25:54.014270173 +0000 UTC m=+260.630214827" observedRunningTime="2026-01-31 07:25:55.216675424 +0000 UTC m=+261.832620088" watchObservedRunningTime="2026-01-31 07:25:55.219281805 +0000 UTC m=+261.835226459"
Jan 31 07:25:55 crc kubenswrapper[4908]: I0131 07:25:55.750098 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg9bf" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="registry-server" probeResult="failure" output=<
Jan 31 07:25:55 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s
Jan 31 07:25:55 crc kubenswrapper[4908]: >
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.125793 4908 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.126628 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccc69e3-f43e-40e0-8e47-da5491596d18" containerName="pruner"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.126642 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccc69e3-f43e-40e0-8e47-da5491596d18" containerName="pruner"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.126740 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccc69e3-f43e-40e0-8e47-da5491596d18" containerName="pruner"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127062 4908 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127300 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533" gracePeriod=15
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127384 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127420 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360" gracePeriod=15
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127415 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35" gracePeriod=15
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127407 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01" gracePeriod=15
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.127482 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178" gracePeriod=15
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.128802 4908 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129101 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129116 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129129 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129135 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129144 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129149 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129161 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129167 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129179 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129185 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129192 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129198 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129208 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129213 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129312 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129323 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129331 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129342 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129352 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129362 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.129502 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129510 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.129611 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.174867 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288209 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288270 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288304 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288347 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288390 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288442 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288461 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.288481 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390141 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390191 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390225 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390251 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390279 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390285 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390311 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390316 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390353 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390288 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390363 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390418 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390432 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390495 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390547 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.390622 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: I0131 07:25:59.471079 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 07:25:59 crc kubenswrapper[4908]: E0131 07:25:59.510299 4908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fc011644dcac7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:25:59.509691079 +0000 UTC m=+266.125635723,LastTimestamp:2026-01-31 07:25:59.509691079 +0000 UTC m=+266.125635723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.239332 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.240684 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.241320 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178" exitCode=0
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.241345 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35" exitCode=0
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.241352 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01" exitCode=0
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.241359 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360" exitCode=2
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.241406 4908 scope.go:117] "RemoveContainer" containerID="28474c8ba7884bffbf092292d45c2539e1e2d405e50a5315c3ee77e4ec518274"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.243070 4908 generic.go:334] "Generic (PLEG): container finished" podID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" containerID="cea710f2e60c90d29c9f0109ab926542c7c0fdc82be81d9c6fe5c1545f33c3ed" exitCode=0
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.243106 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"14dc50c5-4628-4ac4-a3e5-035acac8d1cc","Type":"ContainerDied","Data":"cea710f2e60c90d29c9f0109ab926542c7c0fdc82be81d9c6fe5c1545f33c3ed"}
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.243597 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.243802 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:00 crc kubenswrapper[4908]: I0131 07:26:00.244491 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4636fa0145e586cfe268241c58a237d77d38c1608667e78e635508ca58ec4757"}
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.252931 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhffb"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.253431 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qhffb"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.253805 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.256674 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130"}
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.459873 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhffb"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.460156 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ld4tn"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.460170 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ld4tn"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.460681 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.461029 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.461266 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.501900 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ld4tn"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.503804 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.505317 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.511335 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.523168 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.667069 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c8k74"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.667371 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c8k74"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.673099 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.673753 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.674171 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.674432 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.674636 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.703452 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c8k74"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.704042 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.704511 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.704832 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.705283 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.705739 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.827393 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir\") pod \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.827518 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14dc50c5-4628-4ac4-a3e5-035acac8d1cc" (UID: "14dc50c5-4628-4ac4-a3e5-035acac8d1cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.827626 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access\") pod \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.827845 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock\") pod \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\" (UID: \"14dc50c5-4628-4ac4-a3e5-035acac8d1cc\") " Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.827955 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "14dc50c5-4628-4ac4-a3e5-035acac8d1cc" (UID: "14dc50c5-4628-4ac4-a3e5-035acac8d1cc"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.828581 4908 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.828615 4908 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.833201 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14dc50c5-4628-4ac4-a3e5-035acac8d1cc" (UID: "14dc50c5-4628-4ac4-a3e5-035acac8d1cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.890369 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.890433 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.929853 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc50c5-4628-4ac4-a3e5-035acac8d1cc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.937558 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.938502 4908 status_manager.go:851] "Failed to get 
status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.938888 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.939167 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.939407 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.939632 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:01 crc kubenswrapper[4908]: I0131 07:26:01.939887 4908 status_manager.go:851] "Failed to get status 
for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.267590 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.268771 4908 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533" exitCode=0 Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.271007 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"14dc50c5-4628-4ac4-a3e5-035acac8d1cc","Type":"ContainerDied","Data":"8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413"} Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.271126 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ddf7dbd3012a2f45ebbc158bbe1eba777d5cd5b5fbeb0cd0da9ee867666e413" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.271478 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.273094 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.273400 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.279288 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.280022 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.280327 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.280488 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.312178 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ld4tn" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.312341 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.312440 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.312656 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.313140 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.313478 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.313726 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.314000 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.314250 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.314421 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.314600 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.314816 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.315068 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.315267 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.317068 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.317761 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.318161 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.318558 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.319171 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.319465 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.320050 4908 status_manager.go:851] "Failed to get 
status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.320400 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.320682 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.321515 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.322014 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.322247 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.322504 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.322694 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.322899 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.323098 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.323298 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.331833 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.332595 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.332877 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.333188 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.333512 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.333717 4908 
status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.333946 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.334434 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.438645 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.438821 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.438862 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.440274 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.440331 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.440356 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.540696 4908 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.540734 4908 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:02 crc kubenswrapper[4908]: I0131 07:26:02.540743 4908 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.279616 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.282216 4908 scope.go:117] "RemoveContainer" containerID="b82b3d3cf36afd6a615ac10c8b203cc1adb09e063d9a74b86615b1c5fb47d178"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.282532 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.300580 4908 scope.go:117] "RemoveContainer" containerID="1f2f5a146674a7035b1787f6b6a889ba7644beee90bd467ba83d3da82fe47b35"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.306800 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.307096 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.307485 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.307760 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.308040 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.308248 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.308495 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.320787 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.321798 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.322324 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.322915 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.323277 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.323607 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.323897 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.324310 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.325011 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.327392 4908 scope.go:117] "RemoveContainer" containerID="68bd29c0ee9643a202678bd5abc8884973f053e261556bb7491f4909d2187a01"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.348226 4908 scope.go:117] "RemoveContainer" containerID="b2e76f55c3d80464d38ca111b7117a06e6e1c472cd87809bd6a5711986cc2360"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.362920 4908 scope.go:117] "RemoveContainer" containerID="177850fafe9766ad734fd0d57867fb44c9dbd083817b06f7e5c971929e1c6533"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.363230 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.363918 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.364481 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.364703 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.364886 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.365081 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.365328 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.365674 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.365921 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.385031 4908 scope.go:117] "RemoveContainer" containerID="6b8351a59a39e0076a38412324bc898a0ed8a3d34d47c90ebc4a57cb41a3b45d"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.699323 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9szx"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.699630 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9szx"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.750544 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9szx"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.751289 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.751817 4908 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.752237 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.752518 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.752803 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.753067 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.753358 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.753647 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.753972 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.857743 4908 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.858360 4908 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.859026 4908 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.859418 4908 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.859725 4908 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.859763 4908 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 31 07:26:03 crc kubenswrapper[4908]: E0131 07:26:03.860053 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms"
Jan 31 07:26:03 crc kubenswrapper[4908]: I0131 07:26:03.951969 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 31 07:26:04 crc kubenswrapper[4908]: E0131 07:26:04.061738 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.326906 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9szx"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.327455 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.328400 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.328762 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.329064 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.329328 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.329656 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.330037 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.330463 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: E0131 07:26:04.464931 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.761715 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wg9bf"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.762318 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.762941 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.763506 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.763805 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.764180 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.764463 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.764684 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.765156 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.765379 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.800260 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wg9bf"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.800892 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.801432 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.801748 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.802146 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.802318 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.802488 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.802677 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.802824 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:04 crc kubenswrapper[4908]: I0131 07:26:04.803020 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.105816 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wj798"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.105894 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wj798"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.142582 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wj798"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.143003 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.143271 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.143561 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.143903 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.144208 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.144630 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.146228 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.146529 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.147110 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.147527 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: E0131 07:26:05.266177 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.359461 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wj798"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.360125 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.360512 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.361107 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.361392 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.361693 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.362112 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.362510 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.362881 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.363272 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:05 crc kubenswrapper[4908]: I0131 07:26:05.363590 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:06 crc kubenswrapper[4908]: E0131 07:26:06.866816 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s"
Jan 31 07:26:06 crc kubenswrapper[4908]: E0131 07:26:06.993451 4908 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" volumeName="registry-storage"
Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.942232 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.943083 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.943517 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.944557 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp
38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.944911 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.945296 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.945623 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.945949 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.946312 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 
38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:07.946623 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: E0131 07:26:09.038571 4908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fc011644dcac7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:25:59.509691079 +0000 UTC m=+266.125635723,LastTimestamp:2026-01-31 07:25:59.509691079 +0000 UTC m=+266.125635723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:26:14 crc kubenswrapper[4908]: E0131 07:26:10.069155 4908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="6.4s" Jan 31 07:26:14 
crc kubenswrapper[4908]: I0131 07:26:11.939879 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.941100 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.941631 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.942156 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.942439 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.942746 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.943114 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.943573 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.943857 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.944174 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.944455 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.960873 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.960900 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:14 crc kubenswrapper[4908]: E0131 07:26:11.961470 4908 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:11.962014 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:14 crc kubenswrapper[4908]: W0131 07:26:11.981539 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5e4390b93e26d74083ee6adf6813596d0dc6542953497895eda52aa5cf0699f2 WatchSource:0}: Error finding container 5e4390b93e26d74083ee6adf6813596d0dc6542953497895eda52aa5cf0699f2: Status 404 returned error can't find the container with id 5e4390b93e26d74083ee6adf6813596d0dc6542953497895eda52aa5cf0699f2 Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:12.344961 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e4390b93e26d74083ee6adf6813596d0dc6542953497895eda52aa5cf0699f2"} Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:13.350669 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe33610eec12f2f1d701de5fdd56ab42ff6303f1338ec65cb10498c8ed605ca5"} Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.357164 4908 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fe33610eec12f2f1d701de5fdd56ab42ff6303f1338ec65cb10498c8ed605ca5" exitCode=0 Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.357229 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fe33610eec12f2f1d701de5fdd56ab42ff6303f1338ec65cb10498c8ed605ca5"} Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.357656 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.357694 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.357956 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.358247 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: E0131 07:26:14.358327 4908 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.358474 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.358683 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" 
pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.358917 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.359171 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.359358 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.359556 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.359831 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" 
pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:14 crc kubenswrapper[4908]: I0131 07:26:14.360340 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.372961 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.373230 4908 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd" exitCode=1 Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.373256 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd"} Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.374053 4908 status_manager.go:851] "Failed to get status for pod" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" pod="openshift-marketplace/certified-operators-qhffb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qhffb\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.374161 4908 scope.go:117] "RemoveContainer" 
containerID="334801ec37442b7534969590181ef2990745d433a534e924d4fa3ed1447774bd" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.374673 4908 status_manager.go:851] "Failed to get status for pod" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" pod="openshift-marketplace/certified-operators-c8k74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c8k74\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.375252 4908 status_manager.go:851] "Failed to get status for pod" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" pod="openshift-marketplace/redhat-operators-wj798" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj798\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.375430 4908 status_manager.go:851] "Failed to get status for pod" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" pod="openshift-marketplace/redhat-marketplace-hqxmf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hqxmf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.375566 4908 status_manager.go:851] "Failed to get status for pod" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" pod="openshift-marketplace/redhat-operators-wg9bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wg9bf\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.375794 4908 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.376070 4908 status_manager.go:851] "Failed to get status for pod" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.376319 4908 status_manager.go:851] "Failed to get status for pod" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" pod="openshift-marketplace/redhat-marketplace-n9szx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n9szx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.376651 4908 status_manager.go:851] "Failed to get status for pod" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" pod="openshift-marketplace/community-operators-ld4tn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ld4tn\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.376874 4908 status_manager.go:851] "Failed to get status for pod" podUID="100bafc6-355c-4131-9907-45004788f44c" pod="openshift-marketplace/community-operators-hfhwx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hfhwx\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.377165 4908 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 31 07:26:15 crc kubenswrapper[4908]: I0131 07:26:15.504148 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:26:16 crc kubenswrapper[4908]: I0131 07:26:16.391306 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a3b75aa4f81cedc23ecb23fbc7764baecb0375a1431d1ab2b953d3d2f9906baf"} Jan 31 07:26:16 crc kubenswrapper[4908]: I0131 07:26:16.413676 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:26:17 crc kubenswrapper[4908]: I0131 07:26:17.399559 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 07:26:17 crc kubenswrapper[4908]: I0131 07:26:17.400067 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"786ee2269f5e0c6995b099c64c7385402d9ea9412f515639a414b724267b9e00"} Jan 31 07:26:17 crc kubenswrapper[4908]: I0131 07:26:17.987323 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:26:19 crc kubenswrapper[4908]: I0131 07:26:19.413357 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f505eea30b036838e0be79bd4848f1a227ff99b159a390c378221815d9c4e4d"} Jan 31 07:26:19 crc kubenswrapper[4908]: I0131 07:26:19.413408 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee6e646d1aeb2a93a438f515620e24800f7cc988aa0464d5d01380c123a792e9"} Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.421514 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad56746f3d2cc99fabe7d427fb8e44edb9969beb3bf42bccfb8e781f253f414f"} Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.421961 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9beb52e54ea0fea1ad04cacf8474fb3c269230440be25ee861bc99277fa4a920"} Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.421995 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.421758 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.422026 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43" Jan 31 07:26:20 crc kubenswrapper[4908]: I0131 07:26:20.431467 4908 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:26:21 crc kubenswrapper[4908]: I0131 07:26:21.426206 4908 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:21 crc kubenswrapper[4908]: I0131 07:26:21.426453 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:21 crc kubenswrapper[4908]: I0131 07:26:21.962915 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:21 crc kubenswrapper[4908]: I0131 07:26:21.962957 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:21 crc kubenswrapper[4908]: I0131 07:26:21.968318 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:22 crc kubenswrapper[4908]: I0131 07:26:22.430391 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:22 crc kubenswrapper[4908]: I0131 07:26:22.430420 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:22 crc kubenswrapper[4908]: I0131 07:26:22.437331 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:23 crc kubenswrapper[4908]: I0131 07:26:23.133129 4908 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ef7cad9f-2474-493d-82b6-eaa56afb3436"
Jan 31 07:26:23 crc kubenswrapper[4908]: I0131 07:26:23.434807 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:23 crc kubenswrapper[4908]: I0131 07:26:23.435109 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:23 crc kubenswrapper[4908]: I0131 07:26:23.437388 4908 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ef7cad9f-2474-493d-82b6-eaa56afb3436"
Jan 31 07:26:24 crc kubenswrapper[4908]: I0131 07:26:24.440625 4908 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:24 crc kubenswrapper[4908]: I0131 07:26:24.440672 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="deb7dd11-7d10-45e2-a561-0d6941c51c43"
Jan 31 07:26:24 crc kubenswrapper[4908]: I0131 07:26:24.443897 4908 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ef7cad9f-2474-493d-82b6-eaa56afb3436"
Jan 31 07:26:25 crc kubenswrapper[4908]: I0131 07:26:25.504336 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 07:26:27 crc kubenswrapper[4908]: I0131 07:26:27.961791 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 07:26:27 crc kubenswrapper[4908]: I0131 07:26:27.965255 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 07:26:28 crc kubenswrapper[4908]: I0131 07:26:28.467721 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 07:26:31 crc kubenswrapper[4908]: I0131 07:26:31.887890 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 31 07:26:32 crc kubenswrapper[4908]: I0131 07:26:32.340359 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 31 07:26:32 crc kubenswrapper[4908]: I0131 07:26:32.459400 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 31 07:26:32 crc kubenswrapper[4908]: I0131 07:26:32.816824 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 31 07:26:32 crc kubenswrapper[4908]: I0131 07:26:32.826794 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.064248 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.115902 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.145516 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.460339 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.460689 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 31 07:26:33 crc kubenswrapper[4908]: I0131 07:26:33.737649 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.032927 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.091616 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.205284 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.659333 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.800509 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 31 07:26:34 crc kubenswrapper[4908]: I0131 07:26:34.807349 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.104498 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.165406 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.268964 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.299834 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.382287 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.413906 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.546253 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.552789 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.634928 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.828827 4908 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 31 07:26:35 crc kubenswrapper[4908]: I0131 07:26:35.940876 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.125541 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.200278 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.277521 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.286031 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.386245 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.417932 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.429001 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.538216 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.551641 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.568049 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.578178 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.720756 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.763512 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.809027 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.910820 4908 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.912773 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.912760667 podStartE2EDuration="37.912760667s" podCreationTimestamp="2026-01-31 07:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:26:23.183303033 +0000 UTC m=+289.799247687" watchObservedRunningTime="2026-01-31 07:26:36.912760667 +0000 UTC m=+303.528705321"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.927402 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.927473 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.935058 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 07:26:36 crc kubenswrapper[4908]: I0131 07:26:36.954559 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.954541265 podStartE2EDuration="16.954541265s" podCreationTimestamp="2026-01-31 07:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:26:36.951354068 +0000 UTC m=+303.567298722" watchObservedRunningTime="2026-01-31 07:26:36.954541265 +0000 UTC m=+303.570485919"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.098304 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.380130 4908 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.405856 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.568686 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.592914 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.611753 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.624203 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.723881 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.745708 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.860597 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.893334 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 31 07:26:37 crc kubenswrapper[4908]: I0131 07:26:37.894473 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.023191 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.086057 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.166156 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.289655 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.341280 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.342892 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.405303 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.505775 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.550659 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.662735 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.680185 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.730058 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.746089 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.811121 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.861968 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 31 07:26:38 crc kubenswrapper[4908]: I0131 07:26:38.930698 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.017962 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.050738 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.113782 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.142075 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.205441 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.261517 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.280266 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.342063 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.349134 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.460537 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.549425 4908 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.645320 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.687992 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.743170 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.760504 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.774725 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.819518 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.898493 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.961819 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 31 07:26:39 crc kubenswrapper[4908]: I0131 07:26:39.976432 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.015917 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.059024 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.152134 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.170701 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.194170 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.209199 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.241571 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.252699 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.305903 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.355672 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.377375 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.384171 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.402443 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.471671 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.475101 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.551965 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.701609 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.705706 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.746123 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.773952 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.814638 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.826848 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.840368 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.874652 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.898518 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 31 07:26:40 crc kubenswrapper[4908]: I0131 07:26:40.927191 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.052701 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.062114 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.100241 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.147311 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.234989 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.343348 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.386555 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.613440 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.640372 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.684328 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.711501 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.907692 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.909840 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.953447 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.960554 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 31 07:26:41 crc kubenswrapper[4908]: I0131 07:26:41.980883 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.095454 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.110079 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.122153 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.152084 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.180814 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.213557 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.306929 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.341724 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.385232 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.407408 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.429480 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.449920 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.465457 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.486625 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.620439 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.636432 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.676755 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.687218 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.688809 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.777692 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.777702 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.786914 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.832926 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.940601 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.963919 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.971753 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 31 07:26:42 crc kubenswrapper[4908]: I0131 07:26:42.989596 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.016534 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.021637 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.027712 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.130705 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.222892 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.260506 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.319769 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.418713 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.491560 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.653321 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.659393 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.692577 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.759340 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.808426 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.836652 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"]
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.836890 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" podUID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" containerName="route-controller-manager" containerID="cri-o://fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d" gracePeriod=30
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.847349 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"]
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.847826 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" podUID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" containerName="controller-manager" containerID="cri-o://36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578" gracePeriod=30
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.908508 4908 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.908714 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130" gracePeriod=5
Jan 31 07:26:43 crc kubenswrapper[4908]: I0131 07:26:43.929667 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.080243 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.152483 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.283563 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.295240 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.360882 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"
Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.367059 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.402560 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.421315 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.422953 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.462843 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config\") pod \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.462891 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert\") pod \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.463037 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca\") pod \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.463086 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdttj\" (UniqueName: \"kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj\") pod 
\"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\" (UID: \"ffd0651f-bbf6-45bc-9897-dd6fb28b729d\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.463870 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca" (OuterVolumeSpecName: "client-ca") pod "ffd0651f-bbf6-45bc-9897-dd6fb28b729d" (UID: "ffd0651f-bbf6-45bc-9897-dd6fb28b729d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.463878 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config" (OuterVolumeSpecName: "config") pod "ffd0651f-bbf6-45bc-9897-dd6fb28b729d" (UID: "ffd0651f-bbf6-45bc-9897-dd6fb28b729d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.468382 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ffd0651f-bbf6-45bc-9897-dd6fb28b729d" (UID: "ffd0651f-bbf6-45bc-9897-dd6fb28b729d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.468401 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj" (OuterVolumeSpecName: "kube-api-access-jdttj") pod "ffd0651f-bbf6-45bc-9897-dd6fb28b729d" (UID: "ffd0651f-bbf6-45bc-9897-dd6fb28b729d"). InnerVolumeSpecName "kube-api-access-jdttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.533349 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.541617 4908 generic.go:334] "Generic (PLEG): container finished" podID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" containerID="36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578" exitCode=0 Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.541670 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" event={"ID":"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0","Type":"ContainerDied","Data":"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578"} Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.541697 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" event={"ID":"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0","Type":"ContainerDied","Data":"ed4ef95aedf16d6d49bb6694f9a082e61dbf6bfb25d6df3d2590fc08d6bef020"} Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.541713 4908 scope.go:117] "RemoveContainer" containerID="36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.541799 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755766b95d-vhs26" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.546860 4908 generic.go:334] "Generic (PLEG): container finished" podID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" containerID="fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d" exitCode=0 Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.546891 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" event={"ID":"ffd0651f-bbf6-45bc-9897-dd6fb28b729d","Type":"ContainerDied","Data":"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d"} Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.546910 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" event={"ID":"ffd0651f-bbf6-45bc-9897-dd6fb28b729d","Type":"ContainerDied","Data":"7be32f708298db97bbadcdd79c0ad1548d59c872128b3d409ee8d36644b5eb96"} Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.546894 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.550037 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.564773 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles\") pod \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.564845 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert\") pod \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.564871 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config\") pod \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.564923 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c86fq\" (UniqueName: \"kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq\") pod \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\" (UID: \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.564970 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca\") pod \"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\" (UID: 
\"d87e2a2b-5881-4e1d-90d9-e9ead40ccad0\") " Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.565194 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.565206 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdttj\" (UniqueName: \"kubernetes.io/projected/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-kube-api-access-jdttj\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.565218 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.565225 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd0651f-bbf6-45bc-9897-dd6fb28b729d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.565959 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" (UID: "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.566024 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config" (OuterVolumeSpecName: "config") pod "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" (UID: "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.566462 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca" (OuterVolumeSpecName: "client-ca") pod "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" (UID: "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.568190 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq" (OuterVolumeSpecName: "kube-api-access-c86fq") pod "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" (UID: "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0"). InnerVolumeSpecName "kube-api-access-c86fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.568205 4908 scope.go:117] "RemoveContainer" containerID="36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.568298 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" (UID: "d87e2a2b-5881-4e1d-90d9-e9ead40ccad0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.568636 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578\": container with ID starting with 36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578 not found: ID does not exist" containerID="36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.568672 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578"} err="failed to get container status \"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578\": rpc error: code = NotFound desc = could not find container \"36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578\": container with ID starting with 36d9d195ad8931a3639541e97989ac4d23169cc0840507a6be7e08f15ba22578 not found: ID does not exist" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.568698 4908 scope.go:117] "RemoveContainer" containerID="fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.569929 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.579029 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.583672 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd9cbfdf8-r6fnr"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.585235 4908 scope.go:117] "RemoveContainer" 
containerID="fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d" Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.592208 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d\": container with ID starting with fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d not found: ID does not exist" containerID="fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.592232 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d"} err="failed to get container status \"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d\": rpc error: code = NotFound desc = could not find container \"fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d\": container with ID starting with fdb597c31867248d12a1481eab52bd497235c6b8fa80368dd62575891558100d not found: ID does not exist" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.593329 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.657440 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.666377 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.666408 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.666420 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.666431 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.666443 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c86fq\" (UniqueName: \"kubernetes.io/projected/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0-kube-api-access-c86fq\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.696086 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.705018 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.745861 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.756250 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.869924 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.873006 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-755766b95d-vhs26"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 
07:26:44.891181 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967135 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.967396 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" containerName="installer" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967412 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" containerName="installer" Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.967421 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" containerName="controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967431 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" containerName="controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.967444 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" containerName="route-controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967451 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" containerName="route-controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: E0131 07:26:44.967463 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967469 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967576 4908 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967588 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dc50c5-4628-4ac4-a3e5-035acac8d1cc" containerName="installer" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967596 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" containerName="controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.967606 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" containerName="route-controller-manager" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.968088 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.970245 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.970486 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.970909 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.971044 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.971181 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.971361 4908 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.976584 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.977812 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.980179 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.980290 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.980447 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.980871 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.986554 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.988109 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.992147 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.992535 4908 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 31 07:26:44 crc kubenswrapper[4908]: I0131 07:26:44.993926 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.019054 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070583 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070630 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070663 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdmn\" (UniqueName: \"kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070694 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070708 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.070736 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.071140 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.071162 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcddp\" (UniqueName: \"kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " 
pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.071187 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.074491 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.104445 4908 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.171787 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.171905 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.171954 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcddp\" (UniqueName: \"kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp\") pod 
\"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.171974 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.172029 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.172046 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.172063 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhdmn\" (UniqueName: \"kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.172087 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.172105 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.173133 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.173186 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.173437 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" 
Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.173477 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.173635 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.176451 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.181908 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.199630 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcddp\" (UniqueName: \"kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp\") pod \"controller-manager-588b9b6fdc-wfrf7\" (UID: 
\"85e9aa1d-27cc-4650-8124-8928002133fb\") " pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.206672 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhdmn\" (UniqueName: \"kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn\") pod \"route-controller-manager-55d6f856d9-jr8wh\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.234295 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.290027 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.312555 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.322461 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.336448 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.369949 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.432970 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.502519 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.572723 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.616742 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.690549 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.743561 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:26:45 crc kubenswrapper[4908]: W0131 07:26:45.750871 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85e9aa1d_27cc_4650_8124_8928002133fb.slice/crio-7ad6176b7efa4731dbd393e6d028d91e145abe3ae843c34a5f808817499fb549 WatchSource:0}: Error finding container 7ad6176b7efa4731dbd393e6d028d91e145abe3ae843c34a5f808817499fb549: 
Status 404 returned error can't find the container with id 7ad6176b7efa4731dbd393e6d028d91e145abe3ae843c34a5f808817499fb549 Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.752890 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.805056 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.936904 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.948119 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87e2a2b-5881-4e1d-90d9-e9ead40ccad0" path="/var/lib/kubelet/pods/d87e2a2b-5881-4e1d-90d9-e9ead40ccad0/volumes" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.948841 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd0651f-bbf6-45bc-9897-dd6fb28b729d" path="/var/lib/kubelet/pods/ffd0651f-bbf6-45bc-9897-dd6fb28b729d/volumes" Jan 31 07:26:45 crc kubenswrapper[4908]: I0131 07:26:45.977027 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.045184 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.195264 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.285290 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.326650 4908 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.390890 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.438281 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.498784 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.562583 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" event={"ID":"85e9aa1d-27cc-4650-8124-8928002133fb","Type":"ContainerStarted","Data":"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882"} Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.563002 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.563020 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" event={"ID":"85e9aa1d-27cc-4650-8124-8928002133fb","Type":"ContainerStarted","Data":"7ad6176b7efa4731dbd393e6d028d91e145abe3ae843c34a5f808817499fb549"} Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.566079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" event={"ID":"7a5bfc4f-edfb-4dd1-a262-21304fde3645","Type":"ContainerStarted","Data":"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3"} Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.566192 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.566231 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" event={"ID":"7a5bfc4f-edfb-4dd1-a262-21304fde3645","Type":"ContainerStarted","Data":"e845199eeae8a82fdc92b284c510b350a3e5d0d3f7dbd9b51cebbc43f532fc94"} Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.570081 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.570809 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.610606 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" podStartSLOduration=3.610581232 podStartE2EDuration="3.610581232s" podCreationTimestamp="2026-01-31 07:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:26:46.5914035 +0000 UTC m=+313.207348154" watchObservedRunningTime="2026-01-31 07:26:46.610581232 +0000 UTC m=+313.226525876" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.616528 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.632128 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" podStartSLOduration=3.632106188 podStartE2EDuration="3.632106188s" podCreationTimestamp="2026-01-31 07:26:43 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:26:46.613017069 +0000 UTC m=+313.228961723" watchObservedRunningTime="2026-01-31 07:26:46.632106188 +0000 UTC m=+313.248050842" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.648426 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.725423 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.829634 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.922254 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 07:26:46 crc kubenswrapper[4908]: I0131 07:26:46.998268 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.012318 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.038061 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.055498 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.106429 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 
07:26:47.161211 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.167830 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.355763 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.497783 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.549096 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.643689 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.733322 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 07:26:47 crc kubenswrapper[4908]: I0131 07:26:47.765715 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.006247 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.235085 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.318574 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 
07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.500762 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.502590 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.685345 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 07:26:48 crc kubenswrapper[4908]: I0131 07:26:48.889686 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.015151 4908 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.505856 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.505968 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.583518 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.583605 4908 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130" exitCode=137 Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.583680 4908 scope.go:117] "RemoveContainer" containerID="512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.583700 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.611826 4908 scope.go:117] "RemoveContainer" containerID="512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130" Jan 31 07:26:49 crc kubenswrapper[4908]: E0131 07:26:49.612814 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130\": container with ID starting with 512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130 not found: ID does not exist" containerID="512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.612879 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130"} err="failed to get container status \"512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130\": rpc error: code = NotFound desc = could 
not find container \"512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130\": container with ID starting with 512f20d9ac032014a1150e4b488c6179c482d6f7bd8e9afe921eacbefb22e130 not found: ID does not exist" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628029 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628130 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628162 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628293 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628359 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628386 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628381 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628518 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.628787 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.629116 4908 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.629144 4908 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.629154 4908 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.638226 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.730435 4908 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.730755 4908 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.957005 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.959642 4908 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.971896 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.971945 4908 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9302d221-bf3f-4650-b672-3bff29c30699" Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.976009 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:26:49 crc kubenswrapper[4908]: I0131 07:26:49.976058 4908 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9302d221-bf3f-4650-b672-3bff29c30699" Jan 31 07:27:02 crc kubenswrapper[4908]: I0131 07:27:02.990095 4908 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 07:27:03 crc kubenswrapper[4908]: I0131 07:27:03.792839 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:27:03 crc kubenswrapper[4908]: I0131 07:27:03.793085 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" podUID="85e9aa1d-27cc-4650-8124-8928002133fb" containerName="controller-manager" containerID="cri-o://06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882" gracePeriod=30 Jan 31 07:27:03 crc kubenswrapper[4908]: I0131 07:27:03.819325 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:27:03 crc kubenswrapper[4908]: I0131 07:27:03.819541 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" podUID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" containerName="route-controller-manager" containerID="cri-o://d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3" gracePeriod=30 Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.297330 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.315889 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert\") pod \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.320777 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a5bfc4f-edfb-4dd1-a262-21304fde3645" (UID: "7a5bfc4f-edfb-4dd1-a262-21304fde3645"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.351858 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.417088 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhdmn\" (UniqueName: \"kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn\") pod \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.417168 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config\") pod \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.417517 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca\") pod \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\" (UID: \"7a5bfc4f-edfb-4dd1-a262-21304fde3645\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.417563 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert\") pod \"85e9aa1d-27cc-4650-8124-8928002133fb\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.417913 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5bfc4f-edfb-4dd1-a262-21304fde3645-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.418090 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca" (OuterVolumeSpecName: 
"client-ca") pod "7a5bfc4f-edfb-4dd1-a262-21304fde3645" (UID: "7a5bfc4f-edfb-4dd1-a262-21304fde3645"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.418132 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config" (OuterVolumeSpecName: "config") pod "7a5bfc4f-edfb-4dd1-a262-21304fde3645" (UID: "7a5bfc4f-edfb-4dd1-a262-21304fde3645"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.419559 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn" (OuterVolumeSpecName: "kube-api-access-lhdmn") pod "7a5bfc4f-edfb-4dd1-a262-21304fde3645" (UID: "7a5bfc4f-edfb-4dd1-a262-21304fde3645"). InnerVolumeSpecName "kube-api-access-lhdmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.421083 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85e9aa1d-27cc-4650-8124-8928002133fb" (UID: "85e9aa1d-27cc-4650-8124-8928002133fb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518539 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca\") pod \"85e9aa1d-27cc-4650-8124-8928002133fb\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518630 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config\") pod \"85e9aa1d-27cc-4650-8124-8928002133fb\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518680 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcddp\" (UniqueName: \"kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp\") pod \"85e9aa1d-27cc-4650-8124-8928002133fb\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518717 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles\") pod \"85e9aa1d-27cc-4650-8124-8928002133fb\" (UID: \"85e9aa1d-27cc-4650-8124-8928002133fb\") " Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518876 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhdmn\" (UniqueName: \"kubernetes.io/projected/7a5bfc4f-edfb-4dd1-a262-21304fde3645-kube-api-access-lhdmn\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518892 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-config\") on node \"crc\" DevicePath \"\"" Jan 
31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518902 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5bfc4f-edfb-4dd1-a262-21304fde3645-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.518913 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e9aa1d-27cc-4650-8124-8928002133fb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.519365 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "85e9aa1d-27cc-4650-8124-8928002133fb" (UID: "85e9aa1d-27cc-4650-8124-8928002133fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.519465 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "85e9aa1d-27cc-4650-8124-8928002133fb" (UID: "85e9aa1d-27cc-4650-8124-8928002133fb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.519504 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config" (OuterVolumeSpecName: "config") pod "85e9aa1d-27cc-4650-8124-8928002133fb" (UID: "85e9aa1d-27cc-4650-8124-8928002133fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.523234 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp" (OuterVolumeSpecName: "kube-api-access-wcddp") pod "85e9aa1d-27cc-4650-8124-8928002133fb" (UID: "85e9aa1d-27cc-4650-8124-8928002133fb"). InnerVolumeSpecName "kube-api-access-wcddp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.619717 4908 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.619790 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.619800 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e9aa1d-27cc-4650-8124-8928002133fb-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.619809 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcddp\" (UniqueName: \"kubernetes.io/projected/85e9aa1d-27cc-4650-8124-8928002133fb-kube-api-access-wcddp\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.672700 4908 generic.go:334] "Generic (PLEG): container finished" podID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" containerID="d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3" exitCode=0 Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.672758 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.672797 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" event={"ID":"7a5bfc4f-edfb-4dd1-a262-21304fde3645","Type":"ContainerDied","Data":"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3"} Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.672859 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh" event={"ID":"7a5bfc4f-edfb-4dd1-a262-21304fde3645","Type":"ContainerDied","Data":"e845199eeae8a82fdc92b284c510b350a3e5d0d3f7dbd9b51cebbc43f532fc94"} Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.672889 4908 scope.go:117] "RemoveContainer" containerID="d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.674742 4908 generic.go:334] "Generic (PLEG): container finished" podID="85e9aa1d-27cc-4650-8124-8928002133fb" containerID="06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882" exitCode=0 Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.674787 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" event={"ID":"85e9aa1d-27cc-4650-8124-8928002133fb","Type":"ContainerDied","Data":"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882"} Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.674796 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.674816 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7" event={"ID":"85e9aa1d-27cc-4650-8124-8928002133fb","Type":"ContainerDied","Data":"7ad6176b7efa4731dbd393e6d028d91e145abe3ae843c34a5f808817499fb549"} Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.699232 4908 scope.go:117] "RemoveContainer" containerID="d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.699315 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:27:04 crc kubenswrapper[4908]: E0131 07:27:04.700033 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3\": container with ID starting with d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3 not found: ID does not exist" containerID="d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.700072 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3"} err="failed to get container status \"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3\": rpc error: code = NotFound desc = could not find container \"d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3\": container with ID starting with d893f60a3dfba695c08c5b76af49fcff498ee92f776fa82bbff504d940693fc3 not found: ID does not exist" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.700097 4908 scope.go:117] "RemoveContainer" 
containerID="06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.703034 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d6f856d9-jr8wh"] Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.710827 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.714895 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-588b9b6fdc-wfrf7"] Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.725036 4908 scope.go:117] "RemoveContainer" containerID="06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882" Jan 31 07:27:04 crc kubenswrapper[4908]: E0131 07:27:04.729236 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882\": container with ID starting with 06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882 not found: ID does not exist" containerID="06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.729269 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882"} err="failed to get container status \"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882\": rpc error: code = NotFound desc = could not find container \"06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882\": container with ID starting with 06a27db502798867a2ee5477f3f37a8d9a46f757efdf212c2b1c0450be725882 not found: ID does not exist" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.978617 4908 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-controller-manager/controller-manager-69b7475fb7-m7lwm"] Jan 31 07:27:04 crc kubenswrapper[4908]: E0131 07:27:04.979134 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e9aa1d-27cc-4650-8124-8928002133fb" containerName="controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.979147 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e9aa1d-27cc-4650-8124-8928002133fb" containerName="controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: E0131 07:27:04.979169 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" containerName="route-controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.979177 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" containerName="route-controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.979268 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e9aa1d-27cc-4650-8124-8928002133fb" containerName="controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.979280 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" containerName="route-controller-manager" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.979629 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.981100 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.982320 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.982887 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.983052 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.983173 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.983275 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.983678 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.984122 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.986598 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.987187 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.987278 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.987617 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.987676 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.987775 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:27:04 crc kubenswrapper[4908]: I0131 07:27:04.997010 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.000028 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b7475fb7-m7lwm"] Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.004598 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.124720 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7806bd0e-1937-4939-9990-4f0ecc1b6a77-serving-cert\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.124780 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-client-ca\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.124809 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.124857 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-config\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.125373 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q79xw\" (UniqueName: \"kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.125430 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-proxy-ca-bundles\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.125583 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.125621 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.125674 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjs8\" (UniqueName: \"kubernetes.io/projected/7806bd0e-1937-4939-9990-4f0ecc1b6a77-kube-api-access-7zjs8\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: 
\"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226521 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-config\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226571 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q79xw\" (UniqueName: \"kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226600 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-proxy-ca-bundles\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226630 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226652 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjs8\" (UniqueName: \"kubernetes.io/projected/7806bd0e-1937-4939-9990-4f0ecc1b6a77-kube-api-access-7zjs8\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226708 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7806bd0e-1937-4939-9990-4f0ecc1b6a77-serving-cert\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226722 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-client-ca\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.226737 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 
07:27:05.228191 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.228215 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-client-ca\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.228704 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.228841 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-config\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.228944 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7806bd0e-1937-4939-9990-4f0ecc1b6a77-proxy-ca-bundles\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " 
pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.230744 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.235600 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7806bd0e-1937-4939-9990-4f0ecc1b6a77-serving-cert\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.242839 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjs8\" (UniqueName: \"kubernetes.io/projected/7806bd0e-1937-4939-9990-4f0ecc1b6a77-kube-api-access-7zjs8\") pod \"controller-manager-69b7475fb7-m7lwm\" (UID: \"7806bd0e-1937-4939-9990-4f0ecc1b6a77\") " pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.243535 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q79xw\" (UniqueName: \"kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw\") pod \"route-controller-manager-64bd4fc849-ptftd\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.312634 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.321602 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.499903 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b7475fb7-m7lwm"] Jan 31 07:27:05 crc kubenswrapper[4908]: W0131 07:27:05.504941 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7806bd0e_1937_4939_9990_4f0ecc1b6a77.slice/crio-781fe80b842a4eee3bb2165f931ed71e20319d6628986927e10daf2d1afd53ac WatchSource:0}: Error finding container 781fe80b842a4eee3bb2165f931ed71e20319d6628986927e10daf2d1afd53ac: Status 404 returned error can't find the container with id 781fe80b842a4eee3bb2165f931ed71e20319d6628986927e10daf2d1afd53ac Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.683593 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" event={"ID":"7806bd0e-1937-4939-9990-4f0ecc1b6a77","Type":"ContainerStarted","Data":"d89b06939784b4190364655e0f709fb9b3ec54a5bcbacea9dd31001dc6b6a1e0"} Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.683878 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" event={"ID":"7806bd0e-1937-4939-9990-4f0ecc1b6a77","Type":"ContainerStarted","Data":"781fe80b842a4eee3bb2165f931ed71e20319d6628986927e10daf2d1afd53ac"} Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.683895 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.684996 4908 
patch_prober.go:28] interesting pod/controller-manager-69b7475fb7-m7lwm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.685052 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" podUID="7806bd0e-1937-4939-9990-4f0ecc1b6a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.699589 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" podStartSLOduration=2.699573584 podStartE2EDuration="2.699573584s" podCreationTimestamp="2026-01-31 07:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:27:05.695968049 +0000 UTC m=+332.311912703" watchObservedRunningTime="2026-01-31 07:27:05.699573584 +0000 UTC m=+332.315518238" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.762330 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.946618 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5bfc4f-edfb-4dd1-a262-21304fde3645" path="/var/lib/kubelet/pods/7a5bfc4f-edfb-4dd1-a262-21304fde3645/volumes" Jan 31 07:27:05 crc kubenswrapper[4908]: I0131 07:27:05.947423 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e9aa1d-27cc-4650-8124-8928002133fb" path="/var/lib/kubelet/pods/85e9aa1d-27cc-4650-8124-8928002133fb/volumes" Jan 31 
07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.692651 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" event={"ID":"1d548b94-b0d9-4ee6-b77d-48392eb5d892","Type":"ContainerStarted","Data":"54d6032e544b90d28c9d42287e040d72ba53d8d238e5be35773b66adfcd20b6a"} Jan 31 07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.693008 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.693029 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" event={"ID":"1d548b94-b0d9-4ee6-b77d-48392eb5d892","Type":"ContainerStarted","Data":"3f40ed65f1ef86fc3dfebb190a6b8b87c193d354a61fa52926e6bebf841e3f82"} Jan 31 07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.696307 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69b7475fb7-m7lwm" Jan 31 07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.697537 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:06 crc kubenswrapper[4908]: I0131 07:27:06.709692 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" podStartSLOduration=3.7096749300000003 podStartE2EDuration="3.70967493s" podCreationTimestamp="2026-01-31 07:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:27:06.707295867 +0000 UTC m=+333.323240521" watchObservedRunningTime="2026-01-31 07:27:06.70967493 +0000 UTC m=+333.325619574" Jan 31 07:27:08 crc kubenswrapper[4908]: I0131 
07:27:08.813238 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 07:27:16 crc kubenswrapper[4908]: I0131 07:27:16.481788 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 07:27:18 crc kubenswrapper[4908]: I0131 07:27:18.282928 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 07:27:18 crc kubenswrapper[4908]: I0131 07:27:18.295950 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 07:27:18 crc kubenswrapper[4908]: I0131 07:27:18.953874 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 07:27:19 crc kubenswrapper[4908]: I0131 07:27:19.124557 4908 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 07:27:19 crc kubenswrapper[4908]: I0131 07:27:19.412283 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.325435 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.325863 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" podUID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" containerName="route-controller-manager" containerID="cri-o://54d6032e544b90d28c9d42287e040d72ba53d8d238e5be35773b66adfcd20b6a" gracePeriod=30 Jan 31 07:27:24 crc 
kubenswrapper[4908]: I0131 07:27:24.801371 4908 generic.go:334] "Generic (PLEG): container finished" podID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" containerID="54d6032e544b90d28c9d42287e040d72ba53d8d238e5be35773b66adfcd20b6a" exitCode=0 Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.801465 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" event={"ID":"1d548b94-b0d9-4ee6-b77d-48392eb5d892","Type":"ContainerDied","Data":"54d6032e544b90d28c9d42287e040d72ba53d8d238e5be35773b66adfcd20b6a"} Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.860646 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.988044 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert\") pod \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.988186 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config\") pod \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.988568 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q79xw\" (UniqueName: \"kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw\") pod \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.988672 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca\") pod \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\" (UID: \"1d548b94-b0d9-4ee6-b77d-48392eb5d892\") " Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.989466 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d548b94-b0d9-4ee6-b77d-48392eb5d892" (UID: "1d548b94-b0d9-4ee6-b77d-48392eb5d892"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.989526 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config" (OuterVolumeSpecName: "config") pod "1d548b94-b0d9-4ee6-b77d-48392eb5d892" (UID: "1d548b94-b0d9-4ee6-b77d-48392eb5d892"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.995833 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d548b94-b0d9-4ee6-b77d-48392eb5d892" (UID: "1d548b94-b0d9-4ee6-b77d-48392eb5d892"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:27:24 crc kubenswrapper[4908]: I0131 07:27:24.996617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw" (OuterVolumeSpecName: "kube-api-access-q79xw") pod "1d548b94-b0d9-4ee6-b77d-48392eb5d892" (UID: "1d548b94-b0d9-4ee6-b77d-48392eb5d892"). InnerVolumeSpecName "kube-api-access-q79xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.091101 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q79xw\" (UniqueName: \"kubernetes.io/projected/1d548b94-b0d9-4ee6-b77d-48392eb5d892-kube-api-access-q79xw\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.091406 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.091562 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d548b94-b0d9-4ee6-b77d-48392eb5d892-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.091699 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d548b94-b0d9-4ee6-b77d-48392eb5d892-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.743883 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.744133 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c8k74" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="registry-server" containerID="cri-o://2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf" gracePeriod=2 Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.808244 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" 
event={"ID":"1d548b94-b0d9-4ee6-b77d-48392eb5d892","Type":"ContainerDied","Data":"3f40ed65f1ef86fc3dfebb190a6b8b87c193d354a61fa52926e6bebf841e3f82"} Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.808302 4908 scope.go:117] "RemoveContainer" containerID="54d6032e544b90d28c9d42287e040d72ba53d8d238e5be35773b66adfcd20b6a" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.808310 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.853636 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.858293 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-ptftd"] Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.963313 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" path="/var/lib/kubelet/pods/1d548b94-b0d9-4ee6-b77d-48392eb5d892/volumes" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.964096 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.964381 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hfhwx" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="registry-server" containerID="cri-o://121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae" gracePeriod=2 Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.997407 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 31 07:27:25 crc kubenswrapper[4908]: E0131 
07:27:25.997936 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" containerName="route-controller-manager" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.997949 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" containerName="route-controller-manager" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.998096 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d548b94-b0d9-4ee6-b77d-48392eb5d892" containerName="route-controller-manager" Jan 31 07:27:25 crc kubenswrapper[4908]: I0131 07:27:25.998499 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.005219 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.005288 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.005312 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config\") pod 
\"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.005356 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vt7k\" (UniqueName: \"kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.007361 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.008084 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.008298 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.008447 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.008698 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.008798 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.012618 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 
31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.106231 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.106331 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.106397 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.106492 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vt7k\" (UniqueName: \"kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.108431 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca\") pod 
\"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.109526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.120436 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.123510 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vt7k\" (UniqueName: \"kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k\") pod \"route-controller-manager-7bd6bbd66d-xt6h8\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.196733 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.309298 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content\") pod \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.309374 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities\") pod \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.309448 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzgbh\" (UniqueName: \"kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh\") pod \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\" (UID: \"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.310895 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities" (OuterVolumeSpecName: "utilities") pod "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" (UID: "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.312659 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh" (OuterVolumeSpecName: "kube-api-access-zzgbh") pod "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" (UID: "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac"). InnerVolumeSpecName "kube-api-access-zzgbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.362907 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" (UID: "ba3d735e-ca4d-48b1-90c2-2edbcfa582ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.396268 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.397775 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.410672 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.410702 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.410714 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzgbh\" (UniqueName: \"kubernetes.io/projected/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac-kube-api-access-zzgbh\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.443882 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.615504 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content\") pod \"100bafc6-355c-4131-9907-45004788f44c\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.615826 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbrx\" (UniqueName: \"kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx\") pod \"100bafc6-355c-4131-9907-45004788f44c\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.615898 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities\") pod \"100bafc6-355c-4131-9907-45004788f44c\" (UID: \"100bafc6-355c-4131-9907-45004788f44c\") " Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.616838 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities" (OuterVolumeSpecName: "utilities") pod "100bafc6-355c-4131-9907-45004788f44c" (UID: "100bafc6-355c-4131-9907-45004788f44c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.620514 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx" (OuterVolumeSpecName: "kube-api-access-5wbrx") pod "100bafc6-355c-4131-9907-45004788f44c" (UID: "100bafc6-355c-4131-9907-45004788f44c"). InnerVolumeSpecName "kube-api-access-5wbrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.661342 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "100bafc6-355c-4131-9907-45004788f44c" (UID: "100bafc6-355c-4131-9907-45004788f44c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.717367 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbrx\" (UniqueName: \"kubernetes.io/projected/100bafc6-355c-4131-9907-45004788f44c-kube-api-access-5wbrx\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.717441 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.717456 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bafc6-355c-4131-9907-45004788f44c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.793365 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 31 07:27:26 crc kubenswrapper[4908]: W0131 07:27:26.794744 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcf45cd4_3a59_47c7_a63f_9aa22e247021.slice/crio-105173bf637915d28b55a2a0adb3b61a8310d37a55974869e1f523405aa1f6b4 WatchSource:0}: Error finding container 105173bf637915d28b55a2a0adb3b61a8310d37a55974869e1f523405aa1f6b4: Status 404 returned error can't find the container with id 
105173bf637915d28b55a2a0adb3b61a8310d37a55974869e1f523405aa1f6b4 Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.819656 4908 generic.go:334] "Generic (PLEG): container finished" podID="100bafc6-355c-4131-9907-45004788f44c" containerID="121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae" exitCode=0 Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.819764 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerDied","Data":"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae"} Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.819808 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfhwx" event={"ID":"100bafc6-355c-4131-9907-45004788f44c","Type":"ContainerDied","Data":"5335251c873b37870693eaf7f4ba281f9a14a83bd7bc3ccb10d36c46aedd1bcf"} Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.819800 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfhwx" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.819838 4908 scope.go:117] "RemoveContainer" containerID="121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.821641 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" event={"ID":"dcf45cd4-3a59-47c7-a63f-9aa22e247021","Type":"ContainerStarted","Data":"105173bf637915d28b55a2a0adb3b61a8310d37a55974869e1f523405aa1f6b4"} Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.825673 4908 generic.go:334] "Generic (PLEG): container finished" podID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerID="2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf" exitCode=0 Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.825723 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerDied","Data":"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf"} Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.825741 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8k74" event={"ID":"ba3d735e-ca4d-48b1-90c2-2edbcfa582ac","Type":"ContainerDied","Data":"a1da9aa2345801d9df2aa7c782295856a95eda5415a0dafe5d80879717327b8b"} Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.825768 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c8k74" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.855082 4908 scope.go:117] "RemoveContainer" containerID="76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.904795 4908 scope.go:117] "RemoveContainer" containerID="bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.951081 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.957085 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hfhwx"] Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.957733 4908 scope.go:117] "RemoveContainer" containerID="121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae" Jan 31 07:27:26 crc kubenswrapper[4908]: E0131 07:27:26.958159 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae\": container with ID starting with 121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae not found: ID does not exist" containerID="121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.958199 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae"} err="failed to get container status \"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae\": rpc error: code = NotFound desc = could not find container \"121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae\": container with ID starting with 121568c6be516eb96e2c067813ea1469502affabee53442a201a8aad08b8daae not 
found: ID does not exist" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.958229 4908 scope.go:117] "RemoveContainer" containerID="76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4" Jan 31 07:27:26 crc kubenswrapper[4908]: E0131 07:27:26.958864 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4\": container with ID starting with 76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4 not found: ID does not exist" containerID="76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.958893 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4"} err="failed to get container status \"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4\": rpc error: code = NotFound desc = could not find container \"76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4\": container with ID starting with 76b90fffc34716e5e6a8a8e178d489d2fe352042a3dac36e2a621b7c992089a4 not found: ID does not exist" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.958915 4908 scope.go:117] "RemoveContainer" containerID="bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba" Jan 31 07:27:26 crc kubenswrapper[4908]: E0131 07:27:26.959177 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba\": container with ID starting with bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba not found: ID does not exist" containerID="bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.959202 4908 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba"} err="failed to get container status \"bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba\": rpc error: code = NotFound desc = could not find container \"bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba\": container with ID starting with bbd003566d1a9309aeb64e31face697eb37e9dc8dc96de24f3947bd2733485ba not found: ID does not exist" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.959217 4908 scope.go:117] "RemoveContainer" containerID="2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.961260 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.969182 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c8k74"] Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.976724 4908 scope.go:117] "RemoveContainer" containerID="a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d" Jan 31 07:27:26 crc kubenswrapper[4908]: I0131 07:27:26.994922 4908 scope.go:117] "RemoveContainer" containerID="59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.011138 4908 scope.go:117] "RemoveContainer" containerID="2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf" Jan 31 07:27:27 crc kubenswrapper[4908]: E0131 07:27:27.012070 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf\": container with ID starting with 2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf not found: ID does not exist" 
containerID="2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.012111 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf"} err="failed to get container status \"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf\": rpc error: code = NotFound desc = could not find container \"2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf\": container with ID starting with 2d9391ec85825d92309fb7d475e9051a80259e61c1fd5d99e6560db3aea8fadf not found: ID does not exist" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.012139 4908 scope.go:117] "RemoveContainer" containerID="a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d" Jan 31 07:27:27 crc kubenswrapper[4908]: E0131 07:27:27.012377 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d\": container with ID starting with a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d not found: ID does not exist" containerID="a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.012408 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d"} err="failed to get container status \"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d\": rpc error: code = NotFound desc = could not find container \"a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d\": container with ID starting with a1afa65ca189140daf9888a78b1f8fd1193bae0602c0972099450673a8745d2d not found: ID does not exist" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.012431 4908 scope.go:117] 
"RemoveContainer" containerID="59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5" Jan 31 07:27:27 crc kubenswrapper[4908]: E0131 07:27:27.012699 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5\": container with ID starting with 59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5 not found: ID does not exist" containerID="59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.012722 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5"} err="failed to get container status \"59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5\": rpc error: code = NotFound desc = could not find container \"59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5\": container with ID starting with 59ef281ca63a4f4c54fdf6d2b28b4d29bc451b8eaaff7060de89c24f89dae2d5 not found: ID does not exist" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.469545 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.836884 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" event={"ID":"dcf45cd4-3a59-47c7-a63f-9aa22e247021","Type":"ContainerStarted","Data":"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a"} Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.857658 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" podStartSLOduration=3.8576284100000002 podStartE2EDuration="3.85762841s" 
podCreationTimestamp="2026-01-31 07:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:27:27.854139168 +0000 UTC m=+354.470083842" watchObservedRunningTime="2026-01-31 07:27:27.85762841 +0000 UTC m=+354.473573064" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.948540 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100bafc6-355c-4131-9907-45004788f44c" path="/var/lib/kubelet/pods/100bafc6-355c-4131-9907-45004788f44c/volumes" Jan 31 07:27:27 crc kubenswrapper[4908]: I0131 07:27:27.950182 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" path="/var/lib/kubelet/pods/ba3d735e-ca4d-48b1-90c2-2edbcfa582ac/volumes" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.146659 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.147200 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9szx" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="registry-server" containerID="cri-o://e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2" gracePeriod=2 Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.346094 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.346668 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wj798" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="registry-server" containerID="cri-o://83409aa5478b260b9fc14577514e086186725234f8ade5477fced78e2eef583b" gracePeriod=2 Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.854426 4908 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.854852 4908 generic.go:334] "Generic (PLEG): container finished" podID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerID="e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2" exitCode=0 Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.855275 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerDied","Data":"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2"} Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.855308 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9szx" event={"ID":"05f1e995-f324-4225-a4a8-d476a4da7ff4","Type":"ContainerDied","Data":"ccd1121ca22ed2accf8fc4361b87e0d7857ba6b2605f066b56d28b60cc53c98d"} Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.855331 4908 scope.go:117] "RemoveContainer" containerID="e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.866235 4908 generic.go:334] "Generic (PLEG): container finished" podID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerID="83409aa5478b260b9fc14577514e086186725234f8ade5477fced78e2eef583b" exitCode=0 Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.866313 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerDied","Data":"83409aa5478b260b9fc14577514e086186725234f8ade5477fced78e2eef583b"} Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.866527 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:28 crc 
kubenswrapper[4908]: I0131 07:27:28.879333 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.887406 4908 scope.go:117] "RemoveContainer" containerID="fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.915811 4908 scope.go:117] "RemoveContainer" containerID="b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.939465 4908 scope.go:117] "RemoveContainer" containerID="e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2" Jan 31 07:27:28 crc kubenswrapper[4908]: E0131 07:27:28.941371 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2\": container with ID starting with e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2 not found: ID does not exist" containerID="e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.941403 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2"} err="failed to get container status \"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2\": rpc error: code = NotFound desc = could not find container \"e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2\": container with ID starting with e8d2ed3d164cf8d554cf64c87fcbf34735cc5ea7956b9600fb4c8688ff3bcfd2 not found: ID does not exist" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.941425 4908 scope.go:117] "RemoveContainer" containerID="fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4" Jan 31 07:27:28 crc 
kubenswrapper[4908]: E0131 07:27:28.941778 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4\": container with ID starting with fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4 not found: ID does not exist" containerID="fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.941806 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4"} err="failed to get container status \"fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4\": rpc error: code = NotFound desc = could not find container \"fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4\": container with ID starting with fa93e920cebc06331c818632d1e1188e86b8a6af40e7568598381566f9dfb4e4 not found: ID does not exist" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.941824 4908 scope.go:117] "RemoveContainer" containerID="b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f" Jan 31 07:27:28 crc kubenswrapper[4908]: E0131 07:27:28.942141 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f\": container with ID starting with b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f not found: ID does not exist" containerID="b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f" Jan 31 07:27:28 crc kubenswrapper[4908]: I0131 07:27:28.942177 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f"} err="failed to get container status 
\"b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f\": rpc error: code = NotFound desc = could not find container \"b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f\": container with ID starting with b21481e10502d4608d2a9dcc61c6b5f508f2c68cb128b6292cd26dd66992c43f not found: ID does not exist" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.009562 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.047934 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content\") pod \"05f1e995-f324-4225-a4a8-d476a4da7ff4\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.047996 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m4nx\" (UniqueName: \"kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx\") pod \"05f1e995-f324-4225-a4a8-d476a4da7ff4\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.048055 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities\") pod \"05f1e995-f324-4225-a4a8-d476a4da7ff4\" (UID: \"05f1e995-f324-4225-a4a8-d476a4da7ff4\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.048937 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities" (OuterVolumeSpecName: "utilities") pod "05f1e995-f324-4225-a4a8-d476a4da7ff4" (UID: "05f1e995-f324-4225-a4a8-d476a4da7ff4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.052874 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx" (OuterVolumeSpecName: "kube-api-access-8m4nx") pod "05f1e995-f324-4225-a4a8-d476a4da7ff4" (UID: "05f1e995-f324-4225-a4a8-d476a4da7ff4"). InnerVolumeSpecName "kube-api-access-8m4nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.077835 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05f1e995-f324-4225-a4a8-d476a4da7ff4" (UID: "05f1e995-f324-4225-a4a8-d476a4da7ff4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.148704 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities\") pod \"ce51ffb3-c332-4bb8-b574-44911178c9a1\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.148790 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qczk\" (UniqueName: \"kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk\") pod \"ce51ffb3-c332-4bb8-b574-44911178c9a1\" (UID: \"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.148924 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content\") pod \"ce51ffb3-c332-4bb8-b574-44911178c9a1\" (UID: 
\"ce51ffb3-c332-4bb8-b574-44911178c9a1\") " Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.149199 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.149221 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f1e995-f324-4225-a4a8-d476a4da7ff4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.149233 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m4nx\" (UniqueName: \"kubernetes.io/projected/05f1e995-f324-4225-a4a8-d476a4da7ff4-kube-api-access-8m4nx\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.149404 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities" (OuterVolumeSpecName: "utilities") pod "ce51ffb3-c332-4bb8-b574-44911178c9a1" (UID: "ce51ffb3-c332-4bb8-b574-44911178c9a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.151289 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk" (OuterVolumeSpecName: "kube-api-access-8qczk") pod "ce51ffb3-c332-4bb8-b574-44911178c9a1" (UID: "ce51ffb3-c332-4bb8-b574-44911178c9a1"). InnerVolumeSpecName "kube-api-access-8qczk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.250865 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.250928 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qczk\" (UniqueName: \"kubernetes.io/projected/ce51ffb3-c332-4bb8-b574-44911178c9a1-kube-api-access-8qczk\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.258295 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce51ffb3-c332-4bb8-b574-44911178c9a1" (UID: "ce51ffb3-c332-4bb8-b574-44911178c9a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.351641 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce51ffb3-c332-4bb8-b574-44911178c9a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.877620 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9szx" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.882188 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj798" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.882234 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj798" event={"ID":"ce51ffb3-c332-4bb8-b574-44911178c9a1","Type":"ContainerDied","Data":"de3fcbcf87aae94fae8d852867e7c387d43705f1be79c433d49652c49f46ce71"} Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.882264 4908 scope.go:117] "RemoveContainer" containerID="83409aa5478b260b9fc14577514e086186725234f8ade5477fced78e2eef583b" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.899081 4908 scope.go:117] "RemoveContainer" containerID="bd3ca5f6087e38373c1f4469fdc8e713473394765321de78e92823efd9839fd1" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.916327 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.921386 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9szx"] Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.930774 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.934301 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wj798"] Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.935782 4908 scope.go:117] "RemoveContainer" containerID="83a8352a694ce2d73f6f211580a05550c0e675cb58219a424c23e3c7dfcc7446" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.946916 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" path="/var/lib/kubelet/pods/05f1e995-f324-4225-a4a8-d476a4da7ff4/volumes" Jan 31 07:27:29 crc kubenswrapper[4908]: I0131 07:27:29.947848 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" path="/var/lib/kubelet/pods/ce51ffb3-c332-4bb8-b574-44911178c9a1/volumes" Jan 31 07:27:31 crc kubenswrapper[4908]: I0131 07:27:31.855910 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 07:27:43 crc kubenswrapper[4908]: I0131 07:27:43.807623 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 31 07:27:43 crc kubenswrapper[4908]: I0131 07:27:43.808206 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" podUID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" containerName="route-controller-manager" containerID="cri-o://8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a" gracePeriod=30 Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.781306 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.950289 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config\") pod \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.950355 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vt7k\" (UniqueName: \"kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k\") pod \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.950415 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert\") pod \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.950446 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca\") pod \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\" (UID: \"dcf45cd4-3a59-47c7-a63f-9aa22e247021\") " Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.951399 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config" (OuterVolumeSpecName: "config") pod "dcf45cd4-3a59-47c7-a63f-9aa22e247021" (UID: "dcf45cd4-3a59-47c7-a63f-9aa22e247021"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.951429 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca" (OuterVolumeSpecName: "client-ca") pod "dcf45cd4-3a59-47c7-a63f-9aa22e247021" (UID: "dcf45cd4-3a59-47c7-a63f-9aa22e247021"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.955386 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k" (OuterVolumeSpecName: "kube-api-access-6vt7k") pod "dcf45cd4-3a59-47c7-a63f-9aa22e247021" (UID: "dcf45cd4-3a59-47c7-a63f-9aa22e247021"). InnerVolumeSpecName "kube-api-access-6vt7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.955792 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcf45cd4-3a59-47c7-a63f-9aa22e247021" (UID: "dcf45cd4-3a59-47c7-a63f-9aa22e247021"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.958753 4908 generic.go:334] "Generic (PLEG): container finished" podID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" containerID="8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a" exitCode=0 Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.958792 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" event={"ID":"dcf45cd4-3a59-47c7-a63f-9aa22e247021","Type":"ContainerDied","Data":"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a"} Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.958874 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" event={"ID":"dcf45cd4-3a59-47c7-a63f-9aa22e247021","Type":"ContainerDied","Data":"105173bf637915d28b55a2a0adb3b61a8310d37a55974869e1f523405aa1f6b4"} Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.958910 4908 scope.go:117] "RemoveContainer" containerID="8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a" Jan 31 07:27:44 crc kubenswrapper[4908]: I0131 07:27:44.959397 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:44.999970 4908 scope.go:117] "RemoveContainer" containerID="8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.000570 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a\": container with ID starting with 8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a not found: ID does not exist" containerID="8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.000615 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a"} err="failed to get container status \"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a\": rpc error: code = NotFound desc = could not find container \"8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a\": container with ID starting with 8e023bb73faaf00ab7d1c5dbda88cf978643ca569e7807c2c8c028b51cb3875a not found: ID does not exist" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.001317 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.004424 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd6bbd66d-xt6h8"] Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.052470 4908 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcf45cd4-3a59-47c7-a63f-9aa22e247021-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.052532 4908 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.052544 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf45cd4-3a59-47c7-a63f-9aa22e247021-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.052558 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vt7k\" (UniqueName: \"kubernetes.io/projected/dcf45cd4-3a59-47c7-a63f-9aa22e247021-kube-api-access-6vt7k\") on node \"crc\" DevicePath \"\"" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063563 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79"] Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063827 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063846 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063856 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063864 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063874 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100bafc6-355c-4131-9907-45004788f44c" 
containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063882 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063894 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063901 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063912 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063920 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063930 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063937 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063950 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" containerName="route-controller-manager" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063957 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" containerName="route-controller-manager" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.063965 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.063971 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.064096 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064132 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.064143 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064148 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.064157 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064180 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.064188 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064194 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="extract-utilities" Jan 31 07:27:45 crc kubenswrapper[4908]: E0131 07:27:45.064215 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064221 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="extract-content" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064330 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce51ffb3-c332-4bb8-b574-44911178c9a1" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064339 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" containerName="route-controller-manager" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064348 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="100bafc6-355c-4131-9907-45004788f44c" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064358 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3d735e-ca4d-48b1-90c2-2edbcfa582ac" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064368 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f1e995-f324-4225-a4a8-d476a4da7ff4" containerName="registry-server" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.064794 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.068271 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.069103 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.069951 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.070255 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.070894 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.071245 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.076079 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79"] Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.254704 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-client-ca\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.254761 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec1796c2-6325-4d8e-88d9-debf92d26f71-serving-cert\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.254858 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnsw\" (UniqueName: \"kubernetes.io/projected/ec1796c2-6325-4d8e-88d9-debf92d26f71-kube-api-access-4lnsw\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.255062 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-config\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.356181 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-client-ca\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.356247 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec1796c2-6325-4d8e-88d9-debf92d26f71-serving-cert\") pod 
\"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.356286 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lnsw\" (UniqueName: \"kubernetes.io/projected/ec1796c2-6325-4d8e-88d9-debf92d26f71-kube-api-access-4lnsw\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.356330 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-config\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.357369 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-client-ca\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.357615 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1796c2-6325-4d8e-88d9-debf92d26f71-config\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.361168 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec1796c2-6325-4d8e-88d9-debf92d26f71-serving-cert\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.379929 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lnsw\" (UniqueName: \"kubernetes.io/projected/ec1796c2-6325-4d8e-88d9-debf92d26f71-kube-api-access-4lnsw\") pod \"route-controller-manager-64bd4fc849-tlv79\" (UID: \"ec1796c2-6325-4d8e-88d9-debf92d26f71\") " pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.389579 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.795534 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79"] Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.948699 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf45cd4-3a59-47c7-a63f-9aa22e247021" path="/var/lib/kubelet/pods/dcf45cd4-3a59-47c7-a63f-9aa22e247021/volumes" Jan 31 07:27:45 crc kubenswrapper[4908]: I0131 07:27:45.966218 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" event={"ID":"ec1796c2-6325-4d8e-88d9-debf92d26f71","Type":"ContainerStarted","Data":"c5a58c06358a7f1738541c8fdd56791d369e4bd56062247fcc48f96d343ba9a2"} Jan 31 07:27:46 crc kubenswrapper[4908]: I0131 07:27:46.977060 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" event={"ID":"ec1796c2-6325-4d8e-88d9-debf92d26f71","Type":"ContainerStarted","Data":"eecefe4b8d0cf82cc2420ab7e5333d3b234604269ef219848fe0a67b36e1ab83"} Jan 31 07:27:46 crc kubenswrapper[4908]: I0131 07:27:46.977377 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:46 crc kubenswrapper[4908]: I0131 07:27:46.984090 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" Jan 31 07:27:46 crc kubenswrapper[4908]: I0131 07:27:46.999067 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64bd4fc849-tlv79" podStartSLOduration=3.999052319 podStartE2EDuration="3.999052319s" podCreationTimestamp="2026-01-31 07:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:27:46.996581334 +0000 UTC m=+373.612525998" watchObservedRunningTime="2026-01-31 07:27:46.999052319 +0000 UTC m=+373.614996973" Jan 31 07:28:10 crc kubenswrapper[4908]: I0131 07:28:10.431327 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:28:10 crc kubenswrapper[4908]: I0131 07:28:10.433113 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.174009 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhffb"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.174904 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhffb" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="registry-server" containerID="cri-o://faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3" gracePeriod=30 Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.194201 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ld4tn"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.194473 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ld4tn" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="registry-server" containerID="cri-o://7f5716a9b24a86a0131b3e9e651743e25eda4e8f36bf4c5b0dfbb43f41ce76b4" gracePeriod=30 Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.207333 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.207571 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator" containerID="cri-o://81c68dd225c27de490fa72542f296f137444119d14355f66d67e51ec9b3bf6fe" gracePeriod=30 Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.221920 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.222283 4908 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-marketplace-hqxmf" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="registry-server" containerID="cri-o://5edf6a5207cf3f6fdc6b3f5d439402a4a5f78bdeeab854d7fad5dedb20567275" gracePeriod=30 Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.231402 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfl8d"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.232189 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.237051 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.237784 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wg9bf" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="registry-server" containerID="cri-o://4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c" gracePeriod=30 Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.239780 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfl8d"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.317492 4908 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cskt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.317563 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.365204 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.365396 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsb2\" (UniqueName: \"kubernetes.io/projected/148ec21b-11ac-46af-b840-f814f86ff031-kube-api-access-wzsb2\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.365671 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.467644 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.468031 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wzsb2\" (UniqueName: \"kubernetes.io/projected/148ec21b-11ac-46af-b840-f814f86ff031-kube-api-access-wzsb2\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.468179 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.468770 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.479102 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/148ec21b-11ac-46af-b840-f814f86ff031-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.488192 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzsb2\" (UniqueName: \"kubernetes.io/projected/148ec21b-11ac-46af-b840-f814f86ff031-kube-api-access-wzsb2\") pod \"marketplace-operator-79b997595-wfl8d\" (UID: \"148ec21b-11ac-46af-b840-f814f86ff031\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.561364 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.703124 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg9bf" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.871871 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities\") pod \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.871941 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content\") pod \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.872040 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56tl\" (UniqueName: \"kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl\") pod \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\" (UID: \"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f\") " Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.873417 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities" (OuterVolumeSpecName: "utilities") pod "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" (UID: "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.876442 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl" (OuterVolumeSpecName: "kube-api-access-b56tl") pod "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" (UID: "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f"). InnerVolumeSpecName "kube-api-access-b56tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.974216 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.974261 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56tl\" (UniqueName: \"kubernetes.io/projected/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-kube-api-access-b56tl\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.989796 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfl8d"] Jan 31 07:28:26 crc kubenswrapper[4908]: I0131 07:28:26.990375 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" (UID: "18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.075580 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.129672 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.197364 4908 generic.go:334] "Generic (PLEG): container finished" podID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerID="faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3" exitCode=0 Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.197446 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerDied","Data":"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3"} Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.197464 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qhffb" Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.197482 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhffb" event={"ID":"dc5d84aa-bc03-4089-be41-0f32bd1ceff4","Type":"ContainerDied","Data":"78b04d97699af8a0461fa1adef1512ef98870ac4eaf5995b7e28c1daa67ad1bc"} Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.197506 4908 scope.go:117] "RemoveContainer" containerID="faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3" Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.200679 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" event={"ID":"148ec21b-11ac-46af-b840-f814f86ff031","Type":"ContainerStarted","Data":"b0a9fbb30e4896f7a73d1a2571aa66f52c5572f38fb7733dcf1e689fae75bfd1"} Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.213738 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c0fecdd-45be-4880-9629-53c2efef8340" containerID="7f5716a9b24a86a0131b3e9e651743e25eda4e8f36bf4c5b0dfbb43f41ce76b4" exitCode=0 Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.213852 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerDied","Data":"7f5716a9b24a86a0131b3e9e651743e25eda4e8f36bf4c5b0dfbb43f41ce76b4"} Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.217139 4908 generic.go:334] "Generic (PLEG): container finished" podID="c294adb2-f360-4af0-9919-dc678235c37d" containerID="81c68dd225c27de490fa72542f296f137444119d14355f66d67e51ec9b3bf6fe" exitCode=0 Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.217246 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" 
event={"ID":"c294adb2-f360-4af0-9919-dc678235c37d","Type":"ContainerDied","Data":"81c68dd225c27de490fa72542f296f137444119d14355f66d67e51ec9b3bf6fe"}
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.218767 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ld4tn"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.220128 4908 scope.go:117] "RemoveContainer" containerID="37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.221798 4908 generic.go:334] "Generic (PLEG): container finished" podID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerID="5edf6a5207cf3f6fdc6b3f5d439402a4a5f78bdeeab854d7fad5dedb20567275" exitCode=0
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.221866 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerDied","Data":"5edf6a5207cf3f6fdc6b3f5d439402a4a5f78bdeeab854d7fad5dedb20567275"}
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.224786 4908 generic.go:334] "Generic (PLEG): container finished" podID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerID="4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c" exitCode=0
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.224830 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerDied","Data":"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"}
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.224854 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg9bf" event={"ID":"18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f","Type":"ContainerDied","Data":"ba9f4b714e0de9f3c637ef3484d179dc4542fb72645a47e1964b820601cee349"}
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.224926 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg9bf"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.234647 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.242743 4908 scope.go:117] "RemoveContainer" containerID="41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.269632 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.278472 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content\") pod \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.278555 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfczh\" (UniqueName: \"kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh\") pod \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.278586 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities\") pod \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\" (UID: \"dc5d84aa-bc03-4089-be41-0f32bd1ceff4\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.280318 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities" (OuterVolumeSpecName: "utilities") pod "dc5d84aa-bc03-4089-be41-0f32bd1ceff4" (UID: "dc5d84aa-bc03-4089-be41-0f32bd1ceff4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.305217 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh" (OuterVolumeSpecName: "kube-api-access-zfczh") pod "dc5d84aa-bc03-4089-be41-0f32bd1ceff4" (UID: "dc5d84aa-bc03-4089-be41-0f32bd1ceff4"). InnerVolumeSpecName "kube-api-access-zfczh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.308699 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"]
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.310502 4908 scope.go:117] "RemoveContainer" containerID="faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.310935 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3\": container with ID starting with faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3 not found: ID does not exist" containerID="faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.310986 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3"} err="failed to get container status \"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3\": rpc error: code = NotFound desc = could not find container \"faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3\": container with ID starting with faf18335c1dcb3fc17f927fd42fc4a03239815b0e76eb24bcbb46f56382b7ed3 not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.311015 4908 scope.go:117] "RemoveContainer" containerID="37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.311285 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e\": container with ID starting with 37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e not found: ID does not exist" containerID="37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.311312 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e"} err="failed to get container status \"37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e\": rpc error: code = NotFound desc = could not find container \"37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e\": container with ID starting with 37c40e34c6f4526aa23df0169bfc1040d05545332b87bfd10a435851534cb90e not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.311331 4908 scope.go:117] "RemoveContainer" containerID="41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.311751 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7\": container with ID starting with 41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7 not found: ID does not exist" containerID="41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.311775 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7"} err="failed to get container status \"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7\": rpc error: code = NotFound desc = could not find container \"41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7\": container with ID starting with 41ea6e3e6e35952afe65f0021c18892f47a7781d9a3a3b728c98799bcd77a1a7 not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.311791 4908 scope.go:117] "RemoveContainer" containerID="4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.321617 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wg9bf"]
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.337572 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5d84aa-bc03-4089-be41-0f32bd1ceff4" (UID: "dc5d84aa-bc03-4089-be41-0f32bd1ceff4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.347973 4908 scope.go:117] "RemoveContainer" containerID="56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.375122 4908 scope.go:117] "RemoveContainer" containerID="fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380349 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities\") pod \"b2db9dfc-20ec-446a-878c-db0e800be1a0\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380407 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content\") pod \"0c0fecdd-45be-4880-9629-53c2efef8340\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380449 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca\") pod \"c294adb2-f360-4af0-9919-dc678235c37d\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380482 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics\") pod \"c294adb2-f360-4af0-9919-dc678235c37d\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380518 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfml7\" (UniqueName: \"kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7\") pod \"c294adb2-f360-4af0-9919-dc678235c37d\" (UID: \"c294adb2-f360-4af0-9919-dc678235c37d\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380571 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggjrz\" (UniqueName: \"kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz\") pod \"0c0fecdd-45be-4880-9629-53c2efef8340\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380632 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhwjd\" (UniqueName: \"kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd\") pod \"b2db9dfc-20ec-446a-878c-db0e800be1a0\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380657 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities\") pod \"0c0fecdd-45be-4880-9629-53c2efef8340\" (UID: \"0c0fecdd-45be-4880-9629-53c2efef8340\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.380689 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content\") pod \"b2db9dfc-20ec-446a-878c-db0e800be1a0\" (UID: \"b2db9dfc-20ec-446a-878c-db0e800be1a0\") "
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.381572 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.381604 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfczh\" (UniqueName: \"kubernetes.io/projected/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-kube-api-access-zfczh\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.381619 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5d84aa-bc03-4089-be41-0f32bd1ceff4-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.382499 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities" (OuterVolumeSpecName: "utilities") pod "b2db9dfc-20ec-446a-878c-db0e800be1a0" (UID: "b2db9dfc-20ec-446a-878c-db0e800be1a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.384415 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c294adb2-f360-4af0-9919-dc678235c37d" (UID: "c294adb2-f360-4af0-9919-dc678235c37d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.385131 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c294adb2-f360-4af0-9919-dc678235c37d" (UID: "c294adb2-f360-4af0-9919-dc678235c37d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.385354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities" (OuterVolumeSpecName: "utilities") pod "0c0fecdd-45be-4880-9629-53c2efef8340" (UID: "0c0fecdd-45be-4880-9629-53c2efef8340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.386567 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz" (OuterVolumeSpecName: "kube-api-access-ggjrz") pod "0c0fecdd-45be-4880-9629-53c2efef8340" (UID: "0c0fecdd-45be-4880-9629-53c2efef8340"). InnerVolumeSpecName "kube-api-access-ggjrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.386908 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd" (OuterVolumeSpecName: "kube-api-access-nhwjd") pod "b2db9dfc-20ec-446a-878c-db0e800be1a0" (UID: "b2db9dfc-20ec-446a-878c-db0e800be1a0"). InnerVolumeSpecName "kube-api-access-nhwjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.397361 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7" (OuterVolumeSpecName: "kube-api-access-mfml7") pod "c294adb2-f360-4af0-9919-dc678235c37d" (UID: "c294adb2-f360-4af0-9919-dc678235c37d"). InnerVolumeSpecName "kube-api-access-mfml7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.425631 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2db9dfc-20ec-446a-878c-db0e800be1a0" (UID: "b2db9dfc-20ec-446a-878c-db0e800be1a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.438274 4908 scope.go:117] "RemoveContainer" containerID="4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.438806 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c\": container with ID starting with 4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c not found: ID does not exist" containerID="4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.438854 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c"} err="failed to get container status \"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c\": rpc error: code = NotFound desc = could not find container \"4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c\": container with ID starting with 4f58c332f70b200ad731b3f2976d19507777901a607ab37c4c73a7f0715f329c not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.438886 4908 scope.go:117] "RemoveContainer" containerID="56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.439411 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db\": container with ID starting with 56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db not found: ID does not exist" containerID="56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.439446 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db"} err="failed to get container status \"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db\": rpc error: code = NotFound desc = could not find container \"56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db\": container with ID starting with 56f7b139e1f19e5312675f499a9f2405beb20c008aaa3f21107a04e0533614db not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.439475 4908 scope.go:117] "RemoveContainer" containerID="fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98"
Jan 31 07:28:27 crc kubenswrapper[4908]: E0131 07:28:27.439986 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98\": container with ID starting with fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98 not found: ID does not exist" containerID="fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.440021 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98"} err="failed to get container status \"fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98\": rpc error: code = NotFound desc = could not find container \"fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98\": container with ID starting with fa12f5f5dca3b5b6d36ad74786bb11a991fdf7f0879fdf9aacda88db5435fc98 not found: ID does not exist"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.441797 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c0fecdd-45be-4880-9629-53c2efef8340" (UID: "0c0fecdd-45be-4880-9629-53c2efef8340"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483139 4908 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483186 4908 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c294adb2-f360-4af0-9919-dc678235c37d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483202 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfml7\" (UniqueName: \"kubernetes.io/projected/c294adb2-f360-4af0-9919-dc678235c37d-kube-api-access-mfml7\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483251 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggjrz\" (UniqueName: \"kubernetes.io/projected/0c0fecdd-45be-4880-9629-53c2efef8340-kube-api-access-ggjrz\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483267 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhwjd\" (UniqueName: \"kubernetes.io/projected/b2db9dfc-20ec-446a-878c-db0e800be1a0-kube-api-access-nhwjd\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483282 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483296 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483308 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2db9dfc-20ec-446a-878c-db0e800be1a0-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.483320 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0fecdd-45be-4880-9629-53c2efef8340-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.527588 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhffb"]
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.531919 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhffb"]
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.934457 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"]
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.960550 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" path="/var/lib/kubelet/pods/18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f/volumes"
Jan 31 07:28:27 crc kubenswrapper[4908]: I0131 07:28:27.961289 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" path="/var/lib/kubelet/pods/dc5d84aa-bc03-4089-be41-0f32bd1ceff4/volumes"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.231504 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.231630 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cskt" event={"ID":"c294adb2-f360-4af0-9919-dc678235c37d","Type":"ContainerDied","Data":"42f1a017586cb978399c44ee3c076cab5f4879c4c8d0a21ff8ff9a0de54a2ec2"}
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.231685 4908 scope.go:117] "RemoveContainer" containerID="81c68dd225c27de490fa72542f296f137444119d14355f66d67e51ec9b3bf6fe"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.234674 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqxmf"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.234664 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqxmf" event={"ID":"b2db9dfc-20ec-446a-878c-db0e800be1a0","Type":"ContainerDied","Data":"4ad118a6cdfdb81da7850f7a9e64cd0f3c6f489997da2fa4b8f83e62981fd88c"}
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.238088 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" event={"ID":"148ec21b-11ac-46af-b840-f814f86ff031","Type":"ContainerStarted","Data":"8fab96239b66a58302d2da47d77a95956a36941f5dd40ce80e31dd976877bb38"}
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.239164 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.245459 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.245504 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld4tn" event={"ID":"0c0fecdd-45be-4880-9629-53c2efef8340","Type":"ContainerDied","Data":"d41c75f0a90b95d5b5376214e975283f5c765977c07413210dc0a762ecd7e334"}
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.245521 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ld4tn"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.249445 4908 scope.go:117] "RemoveContainer" containerID="5edf6a5207cf3f6fdc6b3f5d439402a4a5f78bdeeab854d7fad5dedb20567275"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.263608 4908 scope.go:117] "RemoveContainer" containerID="8c9a14e976942b45358ee87f97f81fc9980c4eca8a7ac71d68d52b744e2ba0ed"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.276564 4908 scope.go:117] "RemoveContainer" containerID="89e3b579cecb6f161ecdf1c9b57c063b22660b51b27da058b9e6039632d6775d"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.284456 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wfl8d" podStartSLOduration=2.284434953 podStartE2EDuration="2.284434953s" podCreationTimestamp="2026-01-31 07:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:28:28.269962037 +0000 UTC m=+414.885906691" watchObservedRunningTime="2026-01-31 07:28:28.284434953 +0000 UTC m=+414.900379597"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.291167 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.295150 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cskt"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.307888 4908 scope.go:117] "RemoveContainer" containerID="7f5716a9b24a86a0131b3e9e651743e25eda4e8f36bf4c5b0dfbb43f41ce76b4"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.308056 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.315589 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqxmf"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.325139 4908 scope.go:117] "RemoveContainer" containerID="4d113c7a167c8ffe32889850e77ab9def80a86abafc132634c517e9aabc8fc92"
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.334292 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ld4tn"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.338153 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ld4tn"]
Jan 31 07:28:28 crc kubenswrapper[4908]: I0131 07:28:28.350139 4908 scope.go:117] "RemoveContainer" containerID="6042dde4253d792b82f569620786b94646ce5f06f4f6ef578a2af6288f82b628"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.394564 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqs8"]
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.395326 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.395340 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.395351 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.395944 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.395966 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.395988 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.395996 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396002 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396012 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396020 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396031 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396037 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396046 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396051 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396059 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396064 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396073 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396080 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396087 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396093 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396099 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396104 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="extract-utilities"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396112 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396117 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="extract-content"
Jan 31 07:28:29 crc kubenswrapper[4908]: E0131 07:28:29.396127 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396134 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396219 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a62ca4-8320-4c0e-a5e0-7e3aaa8b837f" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396229 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396235 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c294adb2-f360-4af0-9919-dc678235c37d" containerName="marketplace-operator"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396246 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5d84aa-bc03-4089-be41-0f32bd1ceff4" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396254 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" containerName="registry-server"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.396862 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.399811 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.409750 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqs8"]
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.506466 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m24t\" (UniqueName: \"kubernetes.io/projected/356c3f0f-a93e-470a-850d-0da5329bc06c-kube-api-access-4m24t\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.506548 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-utilities\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.506590 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-catalog-content\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.590509 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qf824"]
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.591591 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf824"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.595330 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.604822 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf824"]
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.607812 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-catalog-content\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.607886 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m24t\" (UniqueName: \"kubernetes.io/projected/356c3f0f-a93e-470a-850d-0da5329bc06c-kube-api-access-4m24t\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.607930 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-utilities\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8"
Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.608435 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-catalog-content\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " 
pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.608519 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356c3f0f-a93e-470a-850d-0da5329bc06c-utilities\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.628949 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m24t\" (UniqueName: \"kubernetes.io/projected/356c3f0f-a93e-470a-850d-0da5329bc06c-kube-api-access-4m24t\") pod \"redhat-marketplace-lqqs8\" (UID: \"356c3f0f-a93e-470a-850d-0da5329bc06c\") " pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.709334 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-utilities\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.709409 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2pb\" (UniqueName: \"kubernetes.io/projected/ef2517bf-41da-4135-98bd-71b7483a6cf8-kube-api-access-7k2pb\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.709443 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-catalog-content\") pod \"redhat-operators-qf824\" (UID: 
\"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.717278 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.810883 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-utilities\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.811575 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2pb\" (UniqueName: \"kubernetes.io/projected/ef2517bf-41da-4135-98bd-71b7483a6cf8-kube-api-access-7k2pb\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.811615 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-catalog-content\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.812140 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-utilities\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.812171 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef2517bf-41da-4135-98bd-71b7483a6cf8-catalog-content\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.832928 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2pb\" (UniqueName: \"kubernetes.io/projected/ef2517bf-41da-4135-98bd-71b7483a6cf8-kube-api-access-7k2pb\") pod \"redhat-operators-qf824\" (UID: \"ef2517bf-41da-4135-98bd-71b7483a6cf8\") " pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.911362 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.951155 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0fecdd-45be-4880-9629-53c2efef8340" path="/var/lib/kubelet/pods/0c0fecdd-45be-4880-9629-53c2efef8340/volumes" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.952188 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2db9dfc-20ec-446a-878c-db0e800be1a0" path="/var/lib/kubelet/pods/b2db9dfc-20ec-446a-878c-db0e800be1a0/volumes" Jan 31 07:28:29 crc kubenswrapper[4908]: I0131 07:28:29.952930 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c294adb2-f360-4af0-9919-dc678235c37d" path="/var/lib/kubelet/pods/c294adb2-f360-4af0-9919-dc678235c37d/volumes" Jan 31 07:28:30 crc kubenswrapper[4908]: I0131 07:28:30.102503 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf824"] Jan 31 07:28:30 crc kubenswrapper[4908]: I0131 07:28:30.110500 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqs8"] Jan 31 07:28:30 crc kubenswrapper[4908]: I0131 07:28:30.264497 4908 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-qf824" event={"ID":"ef2517bf-41da-4135-98bd-71b7483a6cf8","Type":"ContainerStarted","Data":"6ce294a407a5cb46274d43039d89826c562a320649311fa1aff745b2df253b3d"} Jan 31 07:28:30 crc kubenswrapper[4908]: I0131 07:28:30.265539 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqs8" event={"ID":"356c3f0f-a93e-470a-850d-0da5329bc06c","Type":"ContainerStarted","Data":"11844c719ce4b1097b9e6a63e909df156344c718a765f34814f343ff7b502eec"} Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.278044 4908 generic.go:334] "Generic (PLEG): container finished" podID="ef2517bf-41da-4135-98bd-71b7483a6cf8" containerID="14060f15c56efdf7ae4f4672d575d15bcc15b1fb7bafc810db0c948915de771a" exitCode=0 Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.278251 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf824" event={"ID":"ef2517bf-41da-4135-98bd-71b7483a6cf8","Type":"ContainerDied","Data":"14060f15c56efdf7ae4f4672d575d15bcc15b1fb7bafc810db0c948915de771a"} Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.283545 4908 generic.go:334] "Generic (PLEG): container finished" podID="356c3f0f-a93e-470a-850d-0da5329bc06c" containerID="9a9ca16e4bc4374b0dfbcfc92862d7484c51da2658d52b1507622c88b24041e2" exitCode=0 Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.283609 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqs8" event={"ID":"356c3f0f-a93e-470a-850d-0da5329bc06c","Type":"ContainerDied","Data":"9a9ca16e4bc4374b0dfbcfc92862d7484c51da2658d52b1507622c88b24041e2"} Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.787898 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.789079 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.790640 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.806332 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.937710 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.937812 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbn6\" (UniqueName: \"kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.937846 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.987274 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m4dmt"] Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.988313 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:31 crc kubenswrapper[4908]: I0131 07:28:31.990638 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.001679 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4dmt"] Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.038947 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbn6\" (UniqueName: \"kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.039021 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.039099 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.039775 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " 
pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.040221 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.058901 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbn6\" (UniqueName: \"kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6\") pod \"community-operators-l7vpt\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.140216 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.140468 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztc5\" (UniqueName: \"kubernetes.io/projected/5cabe96a-25f1-4049-a1f6-048e6fa2da67-kube-api-access-qztc5\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.140503 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-utilities\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.140526 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-catalog-content\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.242051 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qztc5\" (UniqueName: \"kubernetes.io/projected/5cabe96a-25f1-4049-a1f6-048e6fa2da67-kube-api-access-qztc5\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.242112 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-utilities\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.242137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-catalog-content\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.242798 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-utilities\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.242822 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabe96a-25f1-4049-a1f6-048e6fa2da67-catalog-content\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.259542 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qztc5\" (UniqueName: \"kubernetes.io/projected/5cabe96a-25f1-4049-a1f6-048e6fa2da67-kube-api-access-qztc5\") pod \"certified-operators-m4dmt\" (UID: \"5cabe96a-25f1-4049-a1f6-048e6fa2da67\") " pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.305968 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.536390 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:28:32 crc kubenswrapper[4908]: W0131 07:28:32.543432 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4e226f_ab5f_4ce2_bfbb_baefd681a009.slice/crio-806b3340b1118c083693c84fc095f2aa3783cd8c4d3e7ca31cd302062524403f WatchSource:0}: Error finding container 806b3340b1118c083693c84fc095f2aa3783cd8c4d3e7ca31cd302062524403f: Status 404 returned error can't find the container with id 806b3340b1118c083693c84fc095f2aa3783cd8c4d3e7ca31cd302062524403f Jan 31 07:28:32 crc kubenswrapper[4908]: I0131 07:28:32.692554 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4dmt"] Jan 31 07:28:32 crc kubenswrapper[4908]: W0131 07:28:32.696195 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cabe96a_25f1_4049_a1f6_048e6fa2da67.slice/crio-fb1423adaf680d6b66109c5ee23d6aab9e807c79d7d5d30eef461450382d1d60 WatchSource:0}: Error finding container fb1423adaf680d6b66109c5ee23d6aab9e807c79d7d5d30eef461450382d1d60: Status 404 returned error can't find the container with id fb1423adaf680d6b66109c5ee23d6aab9e807c79d7d5d30eef461450382d1d60 Jan 31 07:28:33 crc kubenswrapper[4908]: I0131 07:28:33.305395 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerStarted","Data":"806b3340b1118c083693c84fc095f2aa3783cd8c4d3e7ca31cd302062524403f"} Jan 31 07:28:33 crc kubenswrapper[4908]: I0131 07:28:33.306419 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4dmt" event={"ID":"5cabe96a-25f1-4049-a1f6-048e6fa2da67","Type":"ContainerStarted","Data":"fb1423adaf680d6b66109c5ee23d6aab9e807c79d7d5d30eef461450382d1d60"} Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.320121 4908 generic.go:334] "Generic (PLEG): container finished" podID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerID="7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1" exitCode=0 Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.320300 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerDied","Data":"7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1"} Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.322640 4908 generic.go:334] "Generic (PLEG): container finished" podID="5cabe96a-25f1-4049-a1f6-048e6fa2da67" containerID="79d2e3ecdddb769229100596a21bdb1de5de6fac8ec1b3ac4920604dc2b09ef7" exitCode=0 Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.322798 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-m4dmt" event={"ID":"5cabe96a-25f1-4049-a1f6-048e6fa2da67","Type":"ContainerDied","Data":"79d2e3ecdddb769229100596a21bdb1de5de6fac8ec1b3ac4920604dc2b09ef7"} Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.327296 4908 generic.go:334] "Generic (PLEG): container finished" podID="ef2517bf-41da-4135-98bd-71b7483a6cf8" containerID="85c9c19625d08022d3e0fe2f8cd1d9a2fa8cadb74a0be1990a10b2ff42163ea8" exitCode=0 Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.327376 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf824" event={"ID":"ef2517bf-41da-4135-98bd-71b7483a6cf8","Type":"ContainerDied","Data":"85c9c19625d08022d3e0fe2f8cd1d9a2fa8cadb74a0be1990a10b2ff42163ea8"} Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.330065 4908 generic.go:334] "Generic (PLEG): container finished" podID="356c3f0f-a93e-470a-850d-0da5329bc06c" containerID="cbdd6b80879dd567dfe9288fe25afc76a8b83076d9340fab06d3d513ad06ea54" exitCode=0 Jan 31 07:28:35 crc kubenswrapper[4908]: I0131 07:28:35.330092 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqs8" event={"ID":"356c3f0f-a93e-470a-850d-0da5329bc06c","Type":"ContainerDied","Data":"cbdd6b80879dd567dfe9288fe25afc76a8b83076d9340fab06d3d513ad06ea54"} Jan 31 07:28:39 crc kubenswrapper[4908]: I0131 07:28:39.591837 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqs8" event={"ID":"356c3f0f-a93e-470a-850d-0da5329bc06c","Type":"ContainerStarted","Data":"2dd15b1f4bb862e8f7f8c267d1081ca280fb075ed919fcc2b16ce132b2ed36ad"} Jan 31 07:28:39 crc kubenswrapper[4908]: I0131 07:28:39.717482 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:39 crc kubenswrapper[4908]: I0131 07:28:39.717534 4908 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:40 crc kubenswrapper[4908]: I0131 07:28:40.431610 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:28:40 crc kubenswrapper[4908]: I0131 07:28:40.431696 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:28:40 crc kubenswrapper[4908]: I0131 07:28:40.758386 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lqqs8" podUID="356c3f0f-a93e-470a-850d-0da5329bc06c" containerName="registry-server" probeResult="failure" output=< Jan 31 07:28:40 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:28:40 crc kubenswrapper[4908]: > Jan 31 07:28:41 crc kubenswrapper[4908]: I0131 07:28:41.606612 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf824" event={"ID":"ef2517bf-41da-4135-98bd-71b7483a6cf8","Type":"ContainerStarted","Data":"53d1c92defbea018e63d486f0f99a219b70f65792a649857d66959ba667fb5ad"} Jan 31 07:28:41 crc kubenswrapper[4908]: I0131 07:28:41.609068 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerStarted","Data":"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e"} Jan 31 07:28:41 crc kubenswrapper[4908]: I0131 07:28:41.610947 4908 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-m4dmt" event={"ID":"5cabe96a-25f1-4049-a1f6-048e6fa2da67","Type":"ContainerStarted","Data":"ad02563ec3f3317181a0878577f2edce63e2ec64f95a67848981dbd741bcdb0f"} Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.619567 4908 generic.go:334] "Generic (PLEG): container finished" podID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerID="c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e" exitCode=0 Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.619686 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerDied","Data":"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e"} Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.624537 4908 generic.go:334] "Generic (PLEG): container finished" podID="5cabe96a-25f1-4049-a1f6-048e6fa2da67" containerID="ad02563ec3f3317181a0878577f2edce63e2ec64f95a67848981dbd741bcdb0f" exitCode=0 Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.624600 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4dmt" event={"ID":"5cabe96a-25f1-4049-a1f6-048e6fa2da67","Type":"ContainerDied","Data":"ad02563ec3f3317181a0878577f2edce63e2ec64f95a67848981dbd741bcdb0f"} Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.641230 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqqs8" podStartSLOduration=6.345026685 podStartE2EDuration="13.641173396s" podCreationTimestamp="2026-01-31 07:28:29 +0000 UTC" firstStartedPulling="2026-01-31 07:28:31.285975348 +0000 UTC m=+417.901920022" lastFinishedPulling="2026-01-31 07:28:38.582122079 +0000 UTC m=+425.198066733" observedRunningTime="2026-01-31 07:28:39.61304074 +0000 UTC m=+426.228985414" watchObservedRunningTime="2026-01-31 07:28:42.641173396 +0000 UTC m=+429.257118170" 
Jan 31 07:28:42 crc kubenswrapper[4908]: I0131 07:28:42.681776 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qf824" podStartSLOduration=5.1205931 podStartE2EDuration="13.68175528s" podCreationTimestamp="2026-01-31 07:28:29 +0000 UTC" firstStartedPulling="2026-01-31 07:28:31.285096435 +0000 UTC m=+417.901041109" lastFinishedPulling="2026-01-31 07:28:39.846258625 +0000 UTC m=+426.462203289" observedRunningTime="2026-01-31 07:28:42.678738429 +0000 UTC m=+429.294683103" watchObservedRunningTime="2026-01-31 07:28:42.68175528 +0000 UTC m=+429.297699934" Jan 31 07:28:44 crc kubenswrapper[4908]: I0131 07:28:44.637278 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerStarted","Data":"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0"} Jan 31 07:28:44 crc kubenswrapper[4908]: I0131 07:28:44.640424 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4dmt" event={"ID":"5cabe96a-25f1-4049-a1f6-048e6fa2da67","Type":"ContainerStarted","Data":"fa413a3aa15598ed143ac6db66c3dae635e1ada821973b2baad5889ee214d698"} Jan 31 07:28:44 crc kubenswrapper[4908]: I0131 07:28:44.668284 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l7vpt" podStartSLOduration=4.981073214 podStartE2EDuration="13.668263649s" podCreationTimestamp="2026-01-31 07:28:31 +0000 UTC" firstStartedPulling="2026-01-31 07:28:35.321878797 +0000 UTC m=+421.937823471" lastFinishedPulling="2026-01-31 07:28:44.009069252 +0000 UTC m=+430.625013906" observedRunningTime="2026-01-31 07:28:44.667650103 +0000 UTC m=+431.283594767" watchObservedRunningTime="2026-01-31 07:28:44.668263649 +0000 UTC m=+431.284208303" Jan 31 07:28:44 crc kubenswrapper[4908]: I0131 07:28:44.687153 4908 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m4dmt" podStartSLOduration=5.439491212 podStartE2EDuration="13.687115352s" podCreationTimestamp="2026-01-31 07:28:31 +0000 UTC" firstStartedPulling="2026-01-31 07:28:35.325169395 +0000 UTC m=+421.941114049" lastFinishedPulling="2026-01-31 07:28:43.572793545 +0000 UTC m=+430.188738189" observedRunningTime="2026-01-31 07:28:44.682706065 +0000 UTC m=+431.298650719" watchObservedRunningTime="2026-01-31 07:28:44.687115352 +0000 UTC m=+431.303060006" Jan 31 07:28:49 crc kubenswrapper[4908]: I0131 07:28:49.757106 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:49 crc kubenswrapper[4908]: I0131 07:28:49.798120 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqqs8" Jan 31 07:28:49 crc kubenswrapper[4908]: I0131 07:28:49.911739 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:49 crc kubenswrapper[4908]: I0131 07:28:49.912235 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:49 crc kubenswrapper[4908]: I0131 07:28:49.951264 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:50 crc kubenswrapper[4908]: I0131 07:28:50.700488 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qf824" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.141127 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.141354 4908 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.193783 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.307301 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.307452 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.346152 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.713554 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.715328 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m4dmt" Jan 31 07:28:52 crc kubenswrapper[4908]: I0131 07:28:52.989332 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" containerID="cri-o://daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd" gracePeriod=15 Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.523064 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.555827 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk"] Jan 31 07:28:55 crc kubenswrapper[4908]: E0131 07:28:54.556026 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.556037 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.556137 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerName="oauth-openshift" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.556449 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.586048 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk"] Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682609 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682652 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc 
kubenswrapper[4908]: I0131 07:28:54.682712 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682735 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682772 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682791 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682817 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682839 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682861 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dmg5\" (UniqueName: \"kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682879 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682911 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682930 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682801 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.682950 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683072 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") pod \"ab833be5-a275-4c72-92d4-f6c93dd249a8\" (UID: \"ab833be5-a275-4c72-92d4-f6c93dd249a8\") " Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683298 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683352 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " 
pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683395 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-dir\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683517 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683560 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683582 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683595 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683599 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683654 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683658 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683682 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683698 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683724 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78th\" (UniqueName: \"kubernetes.io/projected/17dce246-bb4f-401b-9041-df07c8e59bf6-kube-api-access-h78th\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683857 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683884 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-session\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683905 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683942 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-policies\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.683976 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.684003 4908 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.684013 4908 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.684022 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.684041 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.706123 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.706828 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.707304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.707488 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5" (OuterVolumeSpecName: "kube-api-access-4dmg5") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "kube-api-access-4dmg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.707642 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.708092 4908 generic.go:334] "Generic (PLEG): container finished" podID="ab833be5-a275-4c72-92d4-f6c93dd249a8" containerID="daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd" exitCode=0 Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.708280 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.708321 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" event={"ID":"ab833be5-a275-4c72-92d4-f6c93dd249a8","Type":"ContainerDied","Data":"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd"} Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.708361 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltcmm" event={"ID":"ab833be5-a275-4c72-92d4-f6c93dd249a8","Type":"ContainerDied","Data":"7fe779b0b601cf3338a3cfdd76e9c57d91ea47a3b903462570f4c6b83dc9d2dd"} Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.708456 4908 scope.go:117] "RemoveContainer" containerID="daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.709142 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.709180 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.709275 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.721718 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ab833be5-a275-4c72-92d4-f6c93dd249a8" (UID: "ab833be5-a275-4c72-92d4-f6c93dd249a8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.757067 4908 scope.go:117] "RemoveContainer" containerID="daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd" Jan 31 07:28:55 crc kubenswrapper[4908]: E0131 07:28:54.757552 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd\": container with ID starting with daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd not found: ID does not exist" containerID="daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.757607 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd"} err="failed to get container status 
\"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd\": rpc error: code = NotFound desc = could not find container \"daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd\": container with ID starting with daa71833b6d2f5db4e38de5c2434f221667abb5b3ff730011ddd99e637a00fcd not found: ID does not exist" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785274 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785309 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785341 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78th\" (UniqueName: \"kubernetes.io/projected/17dce246-bb4f-401b-9041-df07c8e59bf6-kube-api-access-h78th\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785396 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: 
\"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785422 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-session\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785729 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.785788 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-policies\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786141 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786174 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786207 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-dir\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786255 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786306 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786337 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: 
\"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786364 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786411 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786425 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786440 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786453 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786466 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786491 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dmg5\" (UniqueName: \"kubernetes.io/projected/ab833be5-a275-4c72-92d4-f6c93dd249a8-kube-api-access-4dmg5\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786505 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786517 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786529 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786542 4908 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab833be5-a275-4c72-92d4-f6c93dd249a8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.786690 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-dir\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " 
pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.787201 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-audit-policies\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.787507 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.787842 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.788115 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.791625 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.793538 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.793663 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.795012 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.795291 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " 
pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.795462 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-system-session\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.796406 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.806389 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17dce246-bb4f-401b-9041-df07c8e59bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.808708 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78th\" (UniqueName: \"kubernetes.io/projected/17dce246-bb4f-401b-9041-df07c8e59bf6-kube-api-access-h78th\") pod \"oauth-openshift-7d5db9b7c6-5jxbk\" (UID: \"17dce246-bb4f-401b-9041-df07c8e59bf6\") " pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:54.883799 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:55.051121 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"] Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:55.055819 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltcmm"] Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:55.947594 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab833be5-a275-4c72-92d4-f6c93dd249a8" path="/var/lib/kubelet/pods/ab833be5-a275-4c72-92d4-f6c93dd249a8/volumes" Jan 31 07:28:55 crc kubenswrapper[4908]: I0131 07:28:55.991993 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk"] Jan 31 07:28:56 crc kubenswrapper[4908]: I0131 07:28:56.720694 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" event={"ID":"17dce246-bb4f-401b-9041-df07c8e59bf6","Type":"ContainerStarted","Data":"098a8c47cfb6050235010b0a8e9a4d6ed9ff4e3de58a98fc62ea7db27d58c9bc"} Jan 31 07:28:56 crc kubenswrapper[4908]: I0131 07:28:56.721002 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:56 crc kubenswrapper[4908]: I0131 07:28:56.721015 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" event={"ID":"17dce246-bb4f-401b-9041-df07c8e59bf6","Type":"ContainerStarted","Data":"0f0b806df0deb937cb2e060f3ab7bf8bf5273b20957a9403740f2b6d8785f78f"} Jan 31 07:28:56 crc kubenswrapper[4908]: I0131 07:28:56.726293 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" Jan 31 07:28:56 crc kubenswrapper[4908]: I0131 
07:28:56.744098 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d5db9b7c6-5jxbk" podStartSLOduration=29.744078634 podStartE2EDuration="29.744078634s" podCreationTimestamp="2026-01-31 07:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:28:56.741541106 +0000 UTC m=+443.357485760" watchObservedRunningTime="2026-01-31 07:28:56.744078634 +0000 UTC m=+443.360023288" Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.430887 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.431559 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.431643 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.432408 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.432471 4908 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4" gracePeriod=600 Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.797808 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4" exitCode=0 Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.797879 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4"} Jan 31 07:29:10 crc kubenswrapper[4908]: I0131 07:29:10.797942 4908 scope.go:117] "RemoveContainer" containerID="34ef819486364f86752aaf25789c6e1538d592f02fc1ebaf50374cafc4eb032d" Jan 31 07:29:11 crc kubenswrapper[4908]: I0131 07:29:11.806879 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434"} Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.164377 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg"] Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.165516 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.167074 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.172884 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.174104 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg"] Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.330684 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg9kv\" (UniqueName: \"kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.330763 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.330803 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.432558 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg9kv\" (UniqueName: \"kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.432624 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.432653 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.433470 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.438437 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.447288 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg9kv\" (UniqueName: \"kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv\") pod \"collect-profiles-29497410-g5phg\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.486294 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:00 crc kubenswrapper[4908]: I0131 07:30:00.869042 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg"] Jan 31 07:30:01 crc kubenswrapper[4908]: I0131 07:30:01.090437 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" event={"ID":"62329ea4-99fc-4e24-8a90-00dbf5b649cc","Type":"ContainerStarted","Data":"dd1ab417af1dcd6749e8583d60cc88d895456ebce2e8a1d59fffdf29d8aa519f"} Jan 31 07:30:01 crc kubenswrapper[4908]: I0131 07:30:01.090487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" event={"ID":"62329ea4-99fc-4e24-8a90-00dbf5b649cc","Type":"ContainerStarted","Data":"b63b9770c030db334c5db2fedfcfaa78d8ee4e8a99845af99f7530e2b0f30e56"} Jan 31 07:30:01 crc kubenswrapper[4908]: I0131 07:30:01.107892 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" 
podStartSLOduration=1.107875935 podStartE2EDuration="1.107875935s" podCreationTimestamp="2026-01-31 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:30:01.10466895 +0000 UTC m=+507.720613604" watchObservedRunningTime="2026-01-31 07:30:01.107875935 +0000 UTC m=+507.723820589" Jan 31 07:30:02 crc kubenswrapper[4908]: I0131 07:30:02.097668 4908 generic.go:334] "Generic (PLEG): container finished" podID="62329ea4-99fc-4e24-8a90-00dbf5b649cc" containerID="dd1ab417af1dcd6749e8583d60cc88d895456ebce2e8a1d59fffdf29d8aa519f" exitCode=0 Jan 31 07:30:02 crc kubenswrapper[4908]: I0131 07:30:02.097709 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" event={"ID":"62329ea4-99fc-4e24-8a90-00dbf5b649cc","Type":"ContainerDied","Data":"dd1ab417af1dcd6749e8583d60cc88d895456ebce2e8a1d59fffdf29d8aa519f"} Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.305951 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.370536 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg9kv\" (UniqueName: \"kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv\") pod \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.370588 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume\") pod \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.370610 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume\") pod \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\" (UID: \"62329ea4-99fc-4e24-8a90-00dbf5b649cc\") " Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.371518 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "62329ea4-99fc-4e24-8a90-00dbf5b649cc" (UID: "62329ea4-99fc-4e24-8a90-00dbf5b649cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.375738 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62329ea4-99fc-4e24-8a90-00dbf5b649cc" (UID: "62329ea4-99fc-4e24-8a90-00dbf5b649cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.375943 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv" (OuterVolumeSpecName: "kube-api-access-hg9kv") pod "62329ea4-99fc-4e24-8a90-00dbf5b649cc" (UID: "62329ea4-99fc-4e24-8a90-00dbf5b649cc"). InnerVolumeSpecName "kube-api-access-hg9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.471900 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62329ea4-99fc-4e24-8a90-00dbf5b649cc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.471947 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62329ea4-99fc-4e24-8a90-00dbf5b649cc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:30:03 crc kubenswrapper[4908]: I0131 07:30:03.471961 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg9kv\" (UniqueName: \"kubernetes.io/projected/62329ea4-99fc-4e24-8a90-00dbf5b649cc-kube-api-access-hg9kv\") on node \"crc\" DevicePath \"\"" Jan 31 07:30:04 crc kubenswrapper[4908]: E0131 07:30:04.030476 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62329ea4_99fc_4e24_8a90_00dbf5b649cc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62329ea4_99fc_4e24_8a90_00dbf5b649cc.slice/crio-b63b9770c030db334c5db2fedfcfaa78d8ee4e8a99845af99f7530e2b0f30e56\": RecentStats: unable to find data in memory cache]" Jan 31 07:30:04 crc kubenswrapper[4908]: I0131 07:30:04.110342 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" event={"ID":"62329ea4-99fc-4e24-8a90-00dbf5b649cc","Type":"ContainerDied","Data":"b63b9770c030db334c5db2fedfcfaa78d8ee4e8a99845af99f7530e2b0f30e56"} Jan 31 07:30:04 crc kubenswrapper[4908]: I0131 07:30:04.110380 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b63b9770c030db334c5db2fedfcfaa78d8ee4e8a99845af99f7530e2b0f30e56" Jan 31 07:30:04 crc kubenswrapper[4908]: I0131 07:30:04.110401 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg" Jan 31 07:30:39 crc kubenswrapper[4908]: I0131 07:30:39.063293 4908 scope.go:117] "RemoveContainer" containerID="045657d08ee33e47c4d0719a002f2f8851c792e0f12bf3a20c5b131a9b0d59a8" Jan 31 07:30:39 crc kubenswrapper[4908]: I0131 07:30:39.084086 4908 scope.go:117] "RemoveContainer" containerID="95de7b969e700edee582912771674e68fd023ce575bf4ad16109c3372f64c164" Jan 31 07:31:40 crc kubenswrapper[4908]: I0131 07:31:40.431256 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:31:40 crc kubenswrapper[4908]: I0131 07:31:40.431962 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:32:10 crc kubenswrapper[4908]: I0131 07:32:10.430916 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:32:10 crc kubenswrapper[4908]: I0131 07:32:10.431567 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.422946 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8gp2t"] Jan 31 07:32:21 crc kubenswrapper[4908]: E0131 07:32:21.423631 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62329ea4-99fc-4e24-8a90-00dbf5b649cc" containerName="collect-profiles" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.423647 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="62329ea4-99fc-4e24-8a90-00dbf5b649cc" containerName="collect-profiles" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.423770 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="62329ea4-99fc-4e24-8a90-00dbf5b649cc" containerName="collect-profiles" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.424213 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.443706 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8gp2t"] Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.598127 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5634f312-76cc-48de-9720-147698f4ef09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.598183 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.598258 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-bound-sa-token\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.598951 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ntn\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-kube-api-access-h7ntn\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.599026 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-registry-tls\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.599053 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5634f312-76cc-48de-9720-147698f4ef09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.599113 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-registry-certificates\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.599135 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-trusted-ca\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.622111 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700728 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-registry-tls\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700779 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-registry-certificates\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700798 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5634f312-76cc-48de-9720-147698f4ef09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700816 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-trusted-ca\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700853 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5634f312-76cc-48de-9720-147698f4ef09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700881 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-bound-sa-token\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.700901 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ntn\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-kube-api-access-h7ntn\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.701709 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5634f312-76cc-48de-9720-147698f4ef09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.702753 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-trusted-ca\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 
07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.702886 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5634f312-76cc-48de-9720-147698f4ef09-registry-certificates\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.705616 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-registry-tls\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.705621 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5634f312-76cc-48de-9720-147698f4ef09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.715493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ntn\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-kube-api-access-h7ntn\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.725796 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5634f312-76cc-48de-9720-147698f4ef09-bound-sa-token\") pod \"image-registry-66df7c8f76-8gp2t\" (UID: \"5634f312-76cc-48de-9720-147698f4ef09\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.801939 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:21 crc kubenswrapper[4908]: I0131 07:32:21.977743 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8gp2t"] Jan 31 07:32:22 crc kubenswrapper[4908]: I0131 07:32:22.860669 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" event={"ID":"5634f312-76cc-48de-9720-147698f4ef09","Type":"ContainerStarted","Data":"0b885b8e6ca07831db84898dec4e07d81590a509c1279161f6086a8cbd40db7b"} Jan 31 07:32:22 crc kubenswrapper[4908]: I0131 07:32:22.861129 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:22 crc kubenswrapper[4908]: I0131 07:32:22.861145 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" event={"ID":"5634f312-76cc-48de-9720-147698f4ef09","Type":"ContainerStarted","Data":"f1626d4668e6add24f61447ef56eedcb3d010c73761155a3ab0282a5878b1777"} Jan 31 07:32:22 crc kubenswrapper[4908]: I0131 07:32:22.882630 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" podStartSLOduration=1.882609693 podStartE2EDuration="1.882609693s" podCreationTimestamp="2026-01-31 07:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:32:22.881039799 +0000 UTC m=+649.496984453" watchObservedRunningTime="2026-01-31 07:32:22.882609693 +0000 UTC m=+649.498554347" Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.430674 4908 patch_prober.go:28] interesting 
pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.431356 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.431407 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.432060 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.432128 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434" gracePeriod=600 Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.956529 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434" exitCode=0 Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.956610 
4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434"} Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.956943 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75"} Jan 31 07:32:40 crc kubenswrapper[4908]: I0131 07:32:40.956971 4908 scope.go:117] "RemoveContainer" containerID="b64acf3642ba08c436df2ada76a037d785a8cac8726ab06ab67e5b2ad4afcbf4" Jan 31 07:32:41 crc kubenswrapper[4908]: I0131 07:32:41.809918 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8gp2t" Jan 31 07:32:41 crc kubenswrapper[4908]: I0131 07:32:41.876318 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"] Jan 31 07:33:06 crc kubenswrapper[4908]: I0131 07:33:06.929960 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" podUID="0d369cc1-14e7-49ff-b253-bc196840a444" containerName="registry" containerID="cri-o://851691cf9aae707910865e676c602ccbaaea0d6f70debfb98de1c59487996c0e" gracePeriod=30 Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.134676 4908 generic.go:334] "Generic (PLEG): container finished" podID="0d369cc1-14e7-49ff-b253-bc196840a444" containerID="851691cf9aae707910865e676c602ccbaaea0d6f70debfb98de1c59487996c0e" exitCode=0 Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.134791 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" 
event={"ID":"0d369cc1-14e7-49ff-b253-bc196840a444","Type":"ContainerDied","Data":"851691cf9aae707910865e676c602ccbaaea0d6f70debfb98de1c59487996c0e"} Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.322782 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.470868 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471204 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471248 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471274 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471308 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471340 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471380 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.471406 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5v8l\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l\") pod \"0d369cc1-14e7-49ff-b253-bc196840a444\" (UID: \"0d369cc1-14e7-49ff-b253-bc196840a444\") " Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.472972 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.473225 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.478622 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.478690 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.478934 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.480291 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l" (OuterVolumeSpecName: "kube-api-access-j5v8l") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "kube-api-access-j5v8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.483416 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.499940 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0d369cc1-14e7-49ff-b253-bc196840a444" (UID: "0d369cc1-14e7-49ff-b253-bc196840a444"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572341 4908 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572379 4908 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d369cc1-14e7-49ff-b253-bc196840a444-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572389 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572397 4908 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572406 4908 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d369cc1-14e7-49ff-b253-bc196840a444-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572420 4908 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d369cc1-14e7-49ff-b253-bc196840a444-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:07 crc kubenswrapper[4908]: I0131 07:33:07.572430 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5v8l\" (UniqueName: \"kubernetes.io/projected/0d369cc1-14e7-49ff-b253-bc196840a444-kube-api-access-j5v8l\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:08 crc 
kubenswrapper[4908]: I0131 07:33:08.141696 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" event={"ID":"0d369cc1-14e7-49ff-b253-bc196840a444","Type":"ContainerDied","Data":"170ade0cdeb13d8c098004c415e273987010d0d5be7f2fd8031a845b1f97d25d"} Jan 31 07:33:08 crc kubenswrapper[4908]: I0131 07:33:08.141757 4908 scope.go:117] "RemoveContainer" containerID="851691cf9aae707910865e676c602ccbaaea0d6f70debfb98de1c59487996c0e" Jan 31 07:33:08 crc kubenswrapper[4908]: I0131 07:33:08.141758 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-glh6f" Jan 31 07:33:08 crc kubenswrapper[4908]: I0131 07:33:08.163616 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"] Jan 31 07:33:08 crc kubenswrapper[4908]: I0131 07:33:08.167741 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-glh6f"] Jan 31 07:33:09 crc kubenswrapper[4908]: I0131 07:33:09.947463 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d369cc1-14e7-49ff-b253-bc196840a444" path="/var/lib/kubelet/pods/0d369cc1-14e7-49ff-b253-bc196840a444/volumes" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.070721 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv"] Jan 31 07:33:38 crc kubenswrapper[4908]: E0131 07:33:38.071421 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d369cc1-14e7-49ff-b253-bc196840a444" containerName="registry" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.071436 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d369cc1-14e7-49ff-b253-bc196840a444" containerName="registry" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.071530 4908 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0d369cc1-14e7-49ff-b253-bc196840a444" containerName="registry" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.071893 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.076334 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.077072 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.079905 4908 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zlznc" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.080080 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bfcpr"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.080702 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfcpr" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.082356 4908 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-svfx8" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.087911 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wml9b\" (UniqueName: \"kubernetes.io/projected/2ec70fc1-2433-4ad9-9c14-f9decb3ae354-kube-api-access-wml9b\") pod \"cert-manager-cainjector-cf98fcc89-mbqfv\" (UID: \"2ec70fc1-2433-4ad9-9c14-f9decb3ae354\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.092909 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.101889 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfcpr"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.106849 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dx2s"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.107746 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.114186 4908 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5ntz8" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.119731 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dx2s"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.189497 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjv8l\" (UniqueName: \"kubernetes.io/projected/2af952c0-4d39-4459-a32c-a963c7a6741b-kube-api-access-pjv8l\") pod \"cert-manager-webhook-687f57d79b-8dx2s\" (UID: \"2af952c0-4d39-4459-a32c-a963c7a6741b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.189556 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lp4\" (UniqueName: \"kubernetes.io/projected/80a11742-1646-4db4-9a09-cf364dc4d60c-kube-api-access-75lp4\") pod \"cert-manager-858654f9db-bfcpr\" (UID: \"80a11742-1646-4db4-9a09-cf364dc4d60c\") " pod="cert-manager/cert-manager-858654f9db-bfcpr" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.189582 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wml9b\" (UniqueName: \"kubernetes.io/projected/2ec70fc1-2433-4ad9-9c14-f9decb3ae354-kube-api-access-wml9b\") pod \"cert-manager-cainjector-cf98fcc89-mbqfv\" (UID: \"2ec70fc1-2433-4ad9-9c14-f9decb3ae354\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.209045 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wml9b\" (UniqueName: 
\"kubernetes.io/projected/2ec70fc1-2433-4ad9-9c14-f9decb3ae354-kube-api-access-wml9b\") pod \"cert-manager-cainjector-cf98fcc89-mbqfv\" (UID: \"2ec70fc1-2433-4ad9-9c14-f9decb3ae354\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.290404 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjv8l\" (UniqueName: \"kubernetes.io/projected/2af952c0-4d39-4459-a32c-a963c7a6741b-kube-api-access-pjv8l\") pod \"cert-manager-webhook-687f57d79b-8dx2s\" (UID: \"2af952c0-4d39-4459-a32c-a963c7a6741b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.290764 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75lp4\" (UniqueName: \"kubernetes.io/projected/80a11742-1646-4db4-9a09-cf364dc4d60c-kube-api-access-75lp4\") pod \"cert-manager-858654f9db-bfcpr\" (UID: \"80a11742-1646-4db4-9a09-cf364dc4d60c\") " pod="cert-manager/cert-manager-858654f9db-bfcpr" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.313945 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjv8l\" (UniqueName: \"kubernetes.io/projected/2af952c0-4d39-4459-a32c-a963c7a6741b-kube-api-access-pjv8l\") pod \"cert-manager-webhook-687f57d79b-8dx2s\" (UID: \"2af952c0-4d39-4459-a32c-a963c7a6741b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.313958 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75lp4\" (UniqueName: \"kubernetes.io/projected/80a11742-1646-4db4-9a09-cf364dc4d60c-kube-api-access-75lp4\") pod \"cert-manager-858654f9db-bfcpr\" (UID: \"80a11742-1646-4db4-9a09-cf364dc4d60c\") " pod="cert-manager/cert-manager-858654f9db-bfcpr" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.395937 4908 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.415972 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfcpr" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.429565 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.643001 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv"] Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.651670 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.680037 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfcpr"] Jan 31 07:33:38 crc kubenswrapper[4908]: W0131 07:33:38.684657 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a11742_1646_4db4_9a09_cf364dc4d60c.slice/crio-7782319c6952ea50d5d7f8bc34ae0de54cc4750b1398517d9383013a0140c196 WatchSource:0}: Error finding container 7782319c6952ea50d5d7f8bc34ae0de54cc4750b1398517d9383013a0140c196: Status 404 returned error can't find the container with id 7782319c6952ea50d5d7f8bc34ae0de54cc4750b1398517d9383013a0140c196 Jan 31 07:33:38 crc kubenswrapper[4908]: I0131 07:33:38.711952 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dx2s"] Jan 31 07:33:38 crc kubenswrapper[4908]: W0131 07:33:38.714776 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af952c0_4d39_4459_a32c_a963c7a6741b.slice/crio-c2c1db6a91e45332bf580d3327d5659e97571a508179ece87e9fd49feea4b823 WatchSource:0}: Error finding container c2c1db6a91e45332bf580d3327d5659e97571a508179ece87e9fd49feea4b823: Status 404 returned error can't find the container with id c2c1db6a91e45332bf580d3327d5659e97571a508179ece87e9fd49feea4b823 Jan 31 07:33:39 crc kubenswrapper[4908]: I0131 07:33:39.325187 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" event={"ID":"2ec70fc1-2433-4ad9-9c14-f9decb3ae354","Type":"ContainerStarted","Data":"b7e814019b638bae1222a4374b72ad5d8d7984a0446e3b5661352b61039cfc6c"} Jan 31 07:33:39 crc kubenswrapper[4908]: I0131 07:33:39.326412 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" event={"ID":"2af952c0-4d39-4459-a32c-a963c7a6741b","Type":"ContainerStarted","Data":"c2c1db6a91e45332bf580d3327d5659e97571a508179ece87e9fd49feea4b823"} Jan 31 07:33:39 crc kubenswrapper[4908]: I0131 07:33:39.327282 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfcpr" event={"ID":"80a11742-1646-4db4-9a09-cf364dc4d60c","Type":"ContainerStarted","Data":"7782319c6952ea50d5d7f8bc34ae0de54cc4750b1398517d9383013a0140c196"} Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.346189 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" event={"ID":"2ec70fc1-2433-4ad9-9c14-f9decb3ae354","Type":"ContainerStarted","Data":"8bd0626f6cf0f64eb1ca58b31d35a59e87aafaebea57fec8255d50ee4a6c6a87"} Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.348760 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" 
event={"ID":"2af952c0-4d39-4459-a32c-a963c7a6741b","Type":"ContainerStarted","Data":"be347a2f73bad65c2c17a257e1cdac0c43476eeb5b8b17dcbe349cf06daa7934"} Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.348866 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.350757 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfcpr" event={"ID":"80a11742-1646-4db4-9a09-cf364dc4d60c","Type":"ContainerStarted","Data":"6eaa5082232011ca3b044b69d3a1292838993f025be5244f327ae0c7cdce6c55"} Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.368382 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mbqfv" podStartSLOduration=0.989385645 podStartE2EDuration="4.368367429s" podCreationTimestamp="2026-01-31 07:33:38 +0000 UTC" firstStartedPulling="2026-01-31 07:33:38.651449333 +0000 UTC m=+725.267393987" lastFinishedPulling="2026-01-31 07:33:42.030431117 +0000 UTC m=+728.646375771" observedRunningTime="2026-01-31 07:33:42.365023823 +0000 UTC m=+728.980968477" watchObservedRunningTime="2026-01-31 07:33:42.368367429 +0000 UTC m=+728.984312083" Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.401284 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bfcpr" podStartSLOduration=1.058005273 podStartE2EDuration="4.401258391s" podCreationTimestamp="2026-01-31 07:33:38 +0000 UTC" firstStartedPulling="2026-01-31 07:33:38.687462365 +0000 UTC m=+725.303407019" lastFinishedPulling="2026-01-31 07:33:42.030715493 +0000 UTC m=+728.646660137" observedRunningTime="2026-01-31 07:33:42.398321205 +0000 UTC m=+729.014265859" watchObservedRunningTime="2026-01-31 07:33:42.401258391 +0000 UTC m=+729.017203045" Jan 31 07:33:42 crc kubenswrapper[4908]: I0131 07:33:42.422509 4908 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" podStartSLOduration=1.046015213 podStartE2EDuration="4.42248609s" podCreationTimestamp="2026-01-31 07:33:38 +0000 UTC" firstStartedPulling="2026-01-31 07:33:38.718388113 +0000 UTC m=+725.334332767" lastFinishedPulling="2026-01-31 07:33:42.09485899 +0000 UTC m=+728.710803644" observedRunningTime="2026-01-31 07:33:42.420532386 +0000 UTC m=+729.036477030" watchObservedRunningTime="2026-01-31 07:33:42.42248609 +0000 UTC m=+729.038430744" Jan 31 07:33:48 crc kubenswrapper[4908]: I0131 07:33:48.433298 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8dx2s" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.229926 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xkd4f"] Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230285 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-controller" containerID="cri-o://4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230324 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="nbdb" containerID="cri-o://d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230367 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230400 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="northd" containerID="cri-o://2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230473 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="sbdb" containerID="cri-o://f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230404 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-node" containerID="cri-o://b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.230384 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-acl-logging" containerID="cri-o://317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.266778 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" containerID="cri-o://8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" gracePeriod=30 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.407614 4908 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/2.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.408290 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/1.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.408322 4908 generic.go:334] "Generic (PLEG): container finished" podID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" containerID="76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc" exitCode=2 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.408408 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerDied","Data":"76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc"} Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.408470 4908 scope.go:117] "RemoveContainer" containerID="194cdbb2201c22be4445330e908c269d66f69edaee49bad860a1ba85d7425ded" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.409023 4908 scope.go:117] "RemoveContainer" containerID="76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.409494 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-944z2_openshift-multus(c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b)\"" pod="openshift-multus/multus-944z2" podUID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.417898 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.420375 4908 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-acl-logging/0.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421373 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-controller/0.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421737 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" exitCode=0 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421759 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" exitCode=0 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421783 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" exitCode=0 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421791 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" exitCode=143 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421798 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" exitCode=143 Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421815 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf"} Jan 31 07:33:51 
crc kubenswrapper[4908]: I0131 07:33:51.421840 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78"} Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421867 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8"} Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421878 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee"} Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.421886 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0"} Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.596133 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.601237 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-acl-logging/0.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.601792 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-controller/0.log" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 
07:33:51.602199 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654321 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v89ml"] Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654564 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654578 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654588 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654596 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654610 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654616 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654627 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654634 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654646 
4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="northd" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654654 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="northd" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654664 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-node" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654672 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-node" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654681 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654688 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654699 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kubecfg-setup" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654705 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kubecfg-setup" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654716 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-acl-logging" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654724 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-acl-logging" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654737 4908 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="sbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654761 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="sbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654772 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="nbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654779 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="nbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.654789 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654808 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654915 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654923 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654931 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654938 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654948 4908 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovn-acl-logging" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654956 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654962 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654970 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="kube-rbac-proxy-node" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.654998 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.655007 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="sbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.655018 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="nbdb" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.655026 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="northd" Jan 31 07:33:51 crc kubenswrapper[4908]: E0131 07:33:51.655119 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.655128 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerName="ovnkube-controller" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 
07:33:51.657087 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745656 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745704 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745732 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745755 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745755 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log" (OuterVolumeSpecName: "node-log") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745777 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745811 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745811 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745855 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745896 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745933 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745942 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash" (OuterVolumeSpecName: "host-slash") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745956 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.745993 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket" (OuterVolumeSpecName: "log-socket") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746021 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746073 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746093 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746112 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdzpb\" (UniqueName: \"kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746130 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746151 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746172 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746192 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 
07:33:51.746225 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746235 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746242 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746268 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746288 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746309 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746323 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746334 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch\") pod \"d0d1945f-bd78-48c9-89be-35b3f2908dab\" (UID: \"d0d1945f-bd78-48c9-89be-35b3f2908dab\") " Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746351 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746469 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746577 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746598 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746615 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746635 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.746923 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747090 4908 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747276 4908 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747321 4908 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747332 4908 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747342 4908 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747351 4908 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747361 4908 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747400 4908 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747410 4908 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747419 4908 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747429 4908 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747440 4908 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747480 4908 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-slash\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747491 4908 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747500 4908 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.747511 4908 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.750998 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.751308 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb" (OuterVolumeSpecName: "kube-api-access-mdzpb") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "kube-api-access-mdzpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.758672 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d0d1945f-bd78-48c9-89be-35b3f2908dab" (UID: "d0d1945f-bd78-48c9-89be-35b3f2908dab"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848014 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848078 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-env-overrides\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848095 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-etc-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848112 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-systemd-units\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848133 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-script-lib\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848252 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-netns\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848280 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-log-socket\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848330 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-node-log\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848364 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-netd\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848390 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-ovn\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848426 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2kl\" (UniqueName: \"kubernetes.io/projected/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-kube-api-access-jh2kl\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848456 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-slash\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848476 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-systemd\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848507 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-bin\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848528 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-kubelet\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848550 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848571 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-var-lib-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848592 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848617 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-config\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848636 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovn-node-metrics-cert\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848689 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdzpb\" (UniqueName: \"kubernetes.io/projected/d0d1945f-bd78-48c9-89be-35b3f2908dab-kube-api-access-mdzpb\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848704 4908 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848714 4908 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0d1945f-bd78-48c9-89be-35b3f2908dab-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.848723 4908 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0d1945f-bd78-48c9-89be-35b3f2908dab-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949577 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-node-log\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949614 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-netd\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949644 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-ovn\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949666 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2kl\" (UniqueName: \"kubernetes.io/projected/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-kube-api-access-jh2kl\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949681 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-slash\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949696 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-systemd\") pod 
\"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949720 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-bin\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949741 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-kubelet\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949755 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949771 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-var-lib-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949788 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: 
\"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949794 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-netd\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949830 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-systemd\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949885 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-ovn\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949897 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-node-log\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.949808 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-config\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 
07:33:51.949971 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovn-node-metrics-cert\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950054 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950109 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-env-overrides\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950132 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-etc-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950195 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-systemd-units\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950206 
4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-slash\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950263 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-script-lib\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950293 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-netns\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950341 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-log-socket\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950465 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-config\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950479 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-log-socket\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950512 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-cni-bin\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950540 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-kubelet\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950561 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950582 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-var-lib-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950603 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-run-openvswitch\") pod 
\"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950264 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950747 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-host-run-netns\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950791 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-systemd-units\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.950828 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-etc-openvswitch\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.951068 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-env-overrides\") pod \"ovnkube-node-v89ml\" (UID: 
\"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.951408 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovnkube-script-lib\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.954776 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-ovn-node-metrics-cert\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.965760 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2kl\" (UniqueName: \"kubernetes.io/projected/7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1-kube-api-access-jh2kl\") pod \"ovnkube-node-v89ml\" (UID: \"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:51 crc kubenswrapper[4908]: I0131 07:33:51.971106 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.429039 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovnkube-controller/3.log" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.432124 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-acl-logging/0.log" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.432632 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xkd4f_d0d1945f-bd78-48c9-89be-35b3f2908dab/ovn-controller/0.log" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433207 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" exitCode=0 Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433226 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" exitCode=0 Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433234 4908 generic.go:334] "Generic (PLEG): container finished" podID="d0d1945f-bd78-48c9-89be-35b3f2908dab" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" exitCode=0 Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433234 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433269 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433280 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433290 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" event={"ID":"d0d1945f-bd78-48c9-89be-35b3f2908dab","Type":"ContainerDied","Data":"ea1b8e54729b90d23a6ce5d473a33039d2b4f1b372882394d178ca684441b84c"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433307 4908 scope.go:117] "RemoveContainer" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.433723 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xkd4f" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.434838 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/2.log" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.437374 4908 generic.go:334] "Generic (PLEG): container finished" podID="7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1" containerID="6ec4a38b0f5ebd1beb889d5d84ad40b111c2a671fa0bcbb020e1c75527000ecd" exitCode=0 Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.437418 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerDied","Data":"6ec4a38b0f5ebd1beb889d5d84ad40b111c2a671fa0bcbb020e1c75527000ecd"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.437496 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"6ccfc5cbde30c64125c5289b74ce2880244955e4b628505dacd33b8250b44b6e"} Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.455514 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xkd4f"] Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.457216 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.462876 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xkd4f"] Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.477674 4908 scope.go:117] "RemoveContainer" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.507513 4908 scope.go:117] "RemoveContainer" 
containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.530868 4908 scope.go:117] "RemoveContainer" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.556567 4908 scope.go:117] "RemoveContainer" containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.568803 4908 scope.go:117] "RemoveContainer" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.583843 4908 scope.go:117] "RemoveContainer" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.599592 4908 scope.go:117] "RemoveContainer" containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.679725 4908 scope.go:117] "RemoveContainer" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.702075 4908 scope.go:117] "RemoveContainer" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.702593 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": container with ID starting with 8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65 not found: ID does not exist" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.702637 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65"} err="failed to get container status \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": rpc error: code = NotFound desc = could not find container \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": container with ID starting with 8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.702669 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.703450 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": container with ID starting with 4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07 not found: ID does not exist" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.703478 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07"} err="failed to get container status \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": rpc error: code = NotFound desc = could not find container \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": container with ID starting with 4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.703508 4908 scope.go:117] "RemoveContainer" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.703736 4908 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": container with ID starting with f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf not found: ID does not exist" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.703763 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf"} err="failed to get container status \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": rpc error: code = NotFound desc = could not find container \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": container with ID starting with f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.703777 4908 scope.go:117] "RemoveContainer" containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.704013 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": container with ID starting with d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07 not found: ID does not exist" containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704041 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07"} err="failed to get container status \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": rpc error: code = NotFound desc = could not find container 
\"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": container with ID starting with d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704056 4908 scope.go:117] "RemoveContainer" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.704268 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": container with ID starting with 2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb not found: ID does not exist" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704333 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb"} err="failed to get container status \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": rpc error: code = NotFound desc = could not find container \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": container with ID starting with 2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704354 4908 scope.go:117] "RemoveContainer" containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.704558 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": container with ID starting with 662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78 not found: ID does not exist" 
containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704587 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78"} err="failed to get container status \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": rpc error: code = NotFound desc = could not find container \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": container with ID starting with 662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704604 4908 scope.go:117] "RemoveContainer" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.704800 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": container with ID starting with b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8 not found: ID does not exist" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704827 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8"} err="failed to get container status \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": rpc error: code = NotFound desc = could not find container \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": container with ID starting with b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.704843 4908 scope.go:117] 
"RemoveContainer" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.705067 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": container with ID starting with 317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee not found: ID does not exist" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705092 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee"} err="failed to get container status \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": rpc error: code = NotFound desc = could not find container \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": container with ID starting with 317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705111 4908 scope.go:117] "RemoveContainer" containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.705304 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": container with ID starting with 4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0 not found: ID does not exist" containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705428 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0"} err="failed to get container status \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": rpc error: code = NotFound desc = could not find container \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": container with ID starting with 4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705451 4908 scope.go:117] "RemoveContainer" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" Jan 31 07:33:52 crc kubenswrapper[4908]: E0131 07:33:52.705668 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": container with ID starting with 3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0 not found: ID does not exist" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705687 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0"} err="failed to get container status \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": rpc error: code = NotFound desc = could not find container \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": container with ID starting with 3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705699 4908 scope.go:117] "RemoveContainer" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705897 4908 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65"} err="failed to get container status \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": rpc error: code = NotFound desc = could not find container \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": container with ID starting with 8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.705921 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706133 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07"} err="failed to get container status \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": rpc error: code = NotFound desc = could not find container \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": container with ID starting with 4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706159 4908 scope.go:117] "RemoveContainer" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706414 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf"} err="failed to get container status \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": rpc error: code = NotFound desc = could not find container \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": container with ID starting with f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf not 
found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706434 4908 scope.go:117] "RemoveContainer" containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706669 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07"} err="failed to get container status \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": rpc error: code = NotFound desc = could not find container \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": container with ID starting with d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.706784 4908 scope.go:117] "RemoveContainer" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.707492 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb"} err="failed to get container status \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": rpc error: code = NotFound desc = could not find container \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": container with ID starting with 2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.707555 4908 scope.go:117] "RemoveContainer" containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.707837 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78"} err="failed to get 
container status \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": rpc error: code = NotFound desc = could not find container \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": container with ID starting with 662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.707917 4908 scope.go:117] "RemoveContainer" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708196 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8"} err="failed to get container status \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": rpc error: code = NotFound desc = could not find container \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": container with ID starting with b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708227 4908 scope.go:117] "RemoveContainer" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708495 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee"} err="failed to get container status \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": rpc error: code = NotFound desc = could not find container \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": container with ID starting with 317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708531 4908 scope.go:117] "RemoveContainer" 
containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708785 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0"} err="failed to get container status \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": rpc error: code = NotFound desc = could not find container \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": container with ID starting with 4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.708805 4908 scope.go:117] "RemoveContainer" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.709420 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0"} err="failed to get container status \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": rpc error: code = NotFound desc = could not find container \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": container with ID starting with 3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.709489 4908 scope.go:117] "RemoveContainer" containerID="8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.710121 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65"} err="failed to get container status \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": rpc error: code = NotFound desc = could 
not find container \"8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65\": container with ID starting with 8bcda58fce5a5726da59287d6554d20780c650951bb906b5cb4f2f0810823b65 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.710144 4908 scope.go:117] "RemoveContainer" containerID="4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.710903 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07"} err="failed to get container status \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": rpc error: code = NotFound desc = could not find container \"4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07\": container with ID starting with 4a4530255fb182fa81ca879d3eef3a008fbca0b9cec2e3138498ffe8e5fa9e07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.710998 4908 scope.go:117] "RemoveContainer" containerID="f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.711409 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf"} err="failed to get container status \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": rpc error: code = NotFound desc = could not find container \"f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf\": container with ID starting with f6b8e8e14ce26ca6d8177001250e117710b7ec41a0f4f0b0afbc5ca250fc95bf not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.711432 4908 scope.go:117] "RemoveContainer" containerID="d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 
07:33:52.711715 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07"} err="failed to get container status \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": rpc error: code = NotFound desc = could not find container \"d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07\": container with ID starting with d0d754d3a348e1f75962be00e40ff67ccfa3a4576a9d4aeed591ed00796aca07 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.711805 4908 scope.go:117] "RemoveContainer" containerID="2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712156 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb"} err="failed to get container status \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": rpc error: code = NotFound desc = could not find container \"2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb\": container with ID starting with 2403d0fa4a15a95be4fd21d89ba81547fff3df328b2ef81521ba37d4b50166eb not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712180 4908 scope.go:117] "RemoveContainer" containerID="662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712393 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78"} err="failed to get container status \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": rpc error: code = NotFound desc = could not find container \"662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78\": container with ID starting with 
662cdbbd36fbc0536d2ffeebac6065306878567730a239516cd3fbc5bb7dab78 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712419 4908 scope.go:117] "RemoveContainer" containerID="b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712625 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8"} err="failed to get container status \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": rpc error: code = NotFound desc = could not find container \"b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8\": container with ID starting with b1e93d81a69daa866d8356b6c8c90ec2c32b444cea8aab85790a95bf34f175f8 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.712698 4908 scope.go:117] "RemoveContainer" containerID="317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.713358 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee"} err="failed to get container status \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": rpc error: code = NotFound desc = could not find container \"317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee\": container with ID starting with 317de223c02fc3033f832dd3fbca54bf6775b02e4f379853acdc7da38e0e04ee not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.713440 4908 scope.go:117] "RemoveContainer" containerID="4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.713738 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0"} err="failed to get container status \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": rpc error: code = NotFound desc = could not find container \"4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0\": container with ID starting with 4921d5c3f8f7a85f0a0150a1ca39d2719eb18bc2cf79eee5592b9567c9c2f7b0 not found: ID does not exist" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.713812 4908 scope.go:117] "RemoveContainer" containerID="3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0" Jan 31 07:33:52 crc kubenswrapper[4908]: I0131 07:33:52.714155 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0"} err="failed to get container status \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": rpc error: code = NotFound desc = could not find container \"3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0\": container with ID starting with 3353662fa10a7ca049c0e6fb82c94f701dc7ab2684fd9e227301c731433e17c0 not found: ID does not exist" Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.446718 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"2e238733f1c5690be91781a6e8bb7c7accdc98092400309ddb2a90b52bfada96"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.447064 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"14d9e6fb344104c8cef5bab17057a7c63e78a688b471584b24c5837eb1fdda67"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.447079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"f12abcb292d1903f48f098ee4789e8613f5303fb2e5191bb407648a76955c8ad"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.447091 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"5d8c4020c109da705b7072c8fe59d23b15d13b52ab827fcca6428fd419941e7e"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.447103 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"9b2407b82a96147b1e7dd39592818b31875a224ad224d641ad463656b1aa6a77"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.447114 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"630abdac7d7179adfc215befdd5bdda4c039b196eba829f56e4a8715952a9674"} Jan 31 07:33:53 crc kubenswrapper[4908]: I0131 07:33:53.947731 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d1945f-bd78-48c9-89be-35b3f2908dab" path="/var/lib/kubelet/pods/d0d1945f-bd78-48c9-89be-35b3f2908dab/volumes" Jan 31 07:33:55 crc kubenswrapper[4908]: I0131 07:33:55.458837 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"cbf2e9cb543425ab720862511585ffabe72fed7ae1a83b668fd168937b7018d1"} Jan 31 07:33:58 crc kubenswrapper[4908]: I0131 07:33:58.481048 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" 
event={"ID":"7698bf12-bbe9-46a1-8ca0-9fbb64fc83f1","Type":"ContainerStarted","Data":"7fd64a30bbadd21d4b0ed4dd6615f6226600b49603b5670d652d5fafea63df74"} Jan 31 07:33:58 crc kubenswrapper[4908]: I0131 07:33:58.481604 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:58 crc kubenswrapper[4908]: I0131 07:33:58.481618 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:58 crc kubenswrapper[4908]: I0131 07:33:58.508779 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" podStartSLOduration=7.508758198 podStartE2EDuration="7.508758198s" podCreationTimestamp="2026-01-31 07:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:33:58.506244151 +0000 UTC m=+745.122188835" watchObservedRunningTime="2026-01-31 07:33:58.508758198 +0000 UTC m=+745.124702882" Jan 31 07:33:58 crc kubenswrapper[4908]: I0131 07:33:58.535730 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:59 crc kubenswrapper[4908]: I0131 07:33:59.485927 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:33:59 crc kubenswrapper[4908]: I0131 07:33:59.520356 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:34:05 crc kubenswrapper[4908]: I0131 07:34:05.941177 4908 scope.go:117] "RemoveContainer" containerID="76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc" Jan 31 07:34:05 crc kubenswrapper[4908]: E0131 07:34:05.942713 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-944z2_openshift-multus(c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b)\"" pod="openshift-multus/multus-944z2" podUID="c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b" Jan 31 07:34:18 crc kubenswrapper[4908]: I0131 07:34:18.940225 4908 scope.go:117] "RemoveContainer" containerID="76448e7eaa60d22190bd09ff8cd2152e42afbd2d9e3afc7635062f436b9000dc" Jan 31 07:34:19 crc kubenswrapper[4908]: I0131 07:34:19.603433 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-944z2_c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b/kube-multus/2.log" Jan 31 07:34:19 crc kubenswrapper[4908]: I0131 07:34:19.603952 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-944z2" event={"ID":"c3e4a7ae-cc31-4d51-ad2b-ba5304abf07b","Type":"ContainerStarted","Data":"4969d89ce83aebfcec18bd500aacc328d7ba8bdca7f782935d0ea797b9b9400d"} Jan 31 07:34:21 crc kubenswrapper[4908]: I0131 07:34:21.995112 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v89ml" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.506568 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj"] Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.509023 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.511049 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.517970 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj"] Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.621656 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.622053 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8bv\" (UniqueName: \"kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.622419 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: 
I0131 07:34:28.724328 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.724689 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.724843 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8bv\" (UniqueName: \"kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.725170 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.725473 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.746827 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8bv\" (UniqueName: \"kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:28 crc kubenswrapper[4908]: I0131 07:34:28.827923 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:29 crc kubenswrapper[4908]: I0131 07:34:29.210183 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj"] Jan 31 07:34:29 crc kubenswrapper[4908]: W0131 07:34:29.219403 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e355e32_5cfc_404d_9934_3d15a8189545.slice/crio-61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0 WatchSource:0}: Error finding container 61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0: Status 404 returned error can't find the container with id 61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0 Jan 31 07:34:29 crc kubenswrapper[4908]: I0131 07:34:29.419393 4908 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 07:34:29 crc kubenswrapper[4908]: I0131 
07:34:29.656857 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerStarted","Data":"b693beb91f5020a2a1b9ae764ae755f9e0e345995fac93fc1b9ecd383acc48a5"} Jan 31 07:34:29 crc kubenswrapper[4908]: I0131 07:34:29.656932 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerStarted","Data":"61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0"} Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.669758 4908 generic.go:334] "Generic (PLEG): container finished" podID="1e355e32-5cfc-404d-9934-3d15a8189545" containerID="b693beb91f5020a2a1b9ae764ae755f9e0e345995fac93fc1b9ecd383acc48a5" exitCode=0 Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.669952 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerDied","Data":"b693beb91f5020a2a1b9ae764ae755f9e0e345995fac93fc1b9ecd383acc48a5"} Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.733525 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"] Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.734813 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.745942 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"] Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.849758 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6l7\" (UniqueName: \"kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.849868 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.849908 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.951069 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.951114 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.951142 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6l7\" (UniqueName: \"kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.952165 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.952430 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:30 crc kubenswrapper[4908]: I0131 07:34:30.969586 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6l7\" (UniqueName: \"kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7\") pod \"redhat-operators-ph7z5\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") " pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:31 crc kubenswrapper[4908]: I0131 07:34:31.059735 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ph7z5" Jan 31 07:34:31 crc kubenswrapper[4908]: I0131 07:34:31.450917 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"] Jan 31 07:34:31 crc kubenswrapper[4908]: W0131 07:34:31.455912 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee032981_4065_4eb6_b9a4_fafcd8003697.slice/crio-8917eb7b3f31a86b886b2de6b09a3a6892460f4f1e48fc5e64f85e8202d5d077 WatchSource:0}: Error finding container 8917eb7b3f31a86b886b2de6b09a3a6892460f4f1e48fc5e64f85e8202d5d077: Status 404 returned error can't find the container with id 8917eb7b3f31a86b886b2de6b09a3a6892460f4f1e48fc5e64f85e8202d5d077 Jan 31 07:34:31 crc kubenswrapper[4908]: I0131 07:34:31.676770 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerID="444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5" exitCode=0 Jan 31 07:34:31 crc kubenswrapper[4908]: I0131 07:34:31.676818 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerDied","Data":"444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5"} Jan 31 07:34:31 crc kubenswrapper[4908]: I0131 07:34:31.676847 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerStarted","Data":"8917eb7b3f31a86b886b2de6b09a3a6892460f4f1e48fc5e64f85e8202d5d077"} Jan 31 07:34:32 crc kubenswrapper[4908]: I0131 07:34:32.684825 4908 generic.go:334] "Generic (PLEG): container finished" podID="1e355e32-5cfc-404d-9934-3d15a8189545" containerID="6a392c48f9b657805513444532adff788db63a1e7f74066209d8a91fb4c4dfc2" exitCode=0 Jan 31 07:34:32 crc kubenswrapper[4908]: I0131 07:34:32.684917 
4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerDied","Data":"6a392c48f9b657805513444532adff788db63a1e7f74066209d8a91fb4c4dfc2"} Jan 31 07:34:32 crc kubenswrapper[4908]: I0131 07:34:32.697944 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerStarted","Data":"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"} Jan 31 07:34:33 crc kubenswrapper[4908]: I0131 07:34:33.704345 4908 generic.go:334] "Generic (PLEG): container finished" podID="1e355e32-5cfc-404d-9934-3d15a8189545" containerID="76aad20a1f5fb6b89487ff1c304c6aae3590225e180eb928ef34268114bb94c7" exitCode=0 Jan 31 07:34:33 crc kubenswrapper[4908]: I0131 07:34:33.704404 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerDied","Data":"76aad20a1f5fb6b89487ff1c304c6aae3590225e180eb928ef34268114bb94c7"} Jan 31 07:34:33 crc kubenswrapper[4908]: I0131 07:34:33.706325 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerID="8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d" exitCode=0 Jan 31 07:34:33 crc kubenswrapper[4908]: I0131 07:34:33.706370 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerDied","Data":"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"} Jan 31 07:34:34 crc kubenswrapper[4908]: I0131 07:34:34.713021 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" 
event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerStarted","Data":"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"} Jan 31 07:34:34 crc kubenswrapper[4908]: I0131 07:34:34.737822 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ph7z5" podStartSLOduration=2.317896339 podStartE2EDuration="4.737794918s" podCreationTimestamp="2026-01-31 07:34:30 +0000 UTC" firstStartedPulling="2026-01-31 07:34:31.678054598 +0000 UTC m=+778.293999252" lastFinishedPulling="2026-01-31 07:34:34.097953157 +0000 UTC m=+780.713897831" observedRunningTime="2026-01-31 07:34:34.734405301 +0000 UTC m=+781.350349965" watchObservedRunningTime="2026-01-31 07:34:34.737794918 +0000 UTC m=+781.353739572" Jan 31 07:34:34 crc kubenswrapper[4908]: I0131 07:34:34.939773 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.105103 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8bv\" (UniqueName: \"kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv\") pod \"1e355e32-5cfc-404d-9934-3d15a8189545\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.105149 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util\") pod \"1e355e32-5cfc-404d-9934-3d15a8189545\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.105220 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle\") pod 
\"1e355e32-5cfc-404d-9934-3d15a8189545\" (UID: \"1e355e32-5cfc-404d-9934-3d15a8189545\") " Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.105803 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle" (OuterVolumeSpecName: "bundle") pod "1e355e32-5cfc-404d-9934-3d15a8189545" (UID: "1e355e32-5cfc-404d-9934-3d15a8189545"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.112812 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv" (OuterVolumeSpecName: "kube-api-access-6k8bv") pod "1e355e32-5cfc-404d-9934-3d15a8189545" (UID: "1e355e32-5cfc-404d-9934-3d15a8189545"). InnerVolumeSpecName "kube-api-access-6k8bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.126219 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util" (OuterVolumeSpecName: "util") pod "1e355e32-5cfc-404d-9934-3d15a8189545" (UID: "1e355e32-5cfc-404d-9934-3d15a8189545"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.206514 4908 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.206548 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8bv\" (UniqueName: \"kubernetes.io/projected/1e355e32-5cfc-404d-9934-3d15a8189545-kube-api-access-6k8bv\") on node \"crc\" DevicePath \"\"" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.206559 4908 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e355e32-5cfc-404d-9934-3d15a8189545-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.720117 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" event={"ID":"1e355e32-5cfc-404d-9934-3d15a8189545","Type":"ContainerDied","Data":"61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0"} Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.720170 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61da21c65f1a8d1f3c9508a56fccf585ca2f16d2fa537642a919f70a7a78b4a0" Jan 31 07:34:35 crc kubenswrapper[4908]: I0131 07:34:35.720139 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.890571 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-psr6s"] Jan 31 07:34:39 crc kubenswrapper[4908]: E0131 07:34:39.891030 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="pull" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.891042 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="pull" Jan 31 07:34:39 crc kubenswrapper[4908]: E0131 07:34:39.891051 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="util" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.891056 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="util" Jan 31 07:34:39 crc kubenswrapper[4908]: E0131 07:34:39.891074 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="extract" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.891080 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="extract" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.891167 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e355e32-5cfc-404d-9934-3d15a8189545" containerName="extract" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.891512 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.894333 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.894775 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wwsw8" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.894922 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 07:34:39 crc kubenswrapper[4908]: I0131 07:34:39.909894 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-psr6s"] Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.021351 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfrp\" (UniqueName: \"kubernetes.io/projected/e4126da2-44f9-417d-88ca-4f56a8d7e30e-kube-api-access-vrfrp\") pod \"nmstate-operator-646758c888-psr6s\" (UID: \"e4126da2-44f9-417d-88ca-4f56a8d7e30e\") " pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.123336 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfrp\" (UniqueName: \"kubernetes.io/projected/e4126da2-44f9-417d-88ca-4f56a8d7e30e-kube-api-access-vrfrp\") pod \"nmstate-operator-646758c888-psr6s\" (UID: \"e4126da2-44f9-417d-88ca-4f56a8d7e30e\") " pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.143953 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfrp\" (UniqueName: \"kubernetes.io/projected/e4126da2-44f9-417d-88ca-4f56a8d7e30e-kube-api-access-vrfrp\") pod \"nmstate-operator-646758c888-psr6s\" (UID: 
\"e4126da2-44f9-417d-88ca-4f56a8d7e30e\") " pod="openshift-nmstate/nmstate-operator-646758c888-psr6s"
Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.206625 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-psr6s"
Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.396812 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-psr6s"]
Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.431403 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.431465 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:34:40 crc kubenswrapper[4908]: I0131 07:34:40.749228 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" event={"ID":"e4126da2-44f9-417d-88ca-4f56a8d7e30e","Type":"ContainerStarted","Data":"a9e4c296e69c5597ea3d63f9c847727c82c635c7d5092c1cdf55d6365f51097b"}
Jan 31 07:34:41 crc kubenswrapper[4908]: I0131 07:34:41.061228 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:41 crc kubenswrapper[4908]: I0131 07:34:41.061305 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:41 crc kubenswrapper[4908]: I0131 07:34:41.140549 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:41 crc kubenswrapper[4908]: I0131 07:34:41.810737 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:43 crc kubenswrapper[4908]: I0131 07:34:43.729680 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"]
Jan 31 07:34:43 crc kubenswrapper[4908]: I0131 07:34:43.765830 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" event={"ID":"e4126da2-44f9-417d-88ca-4f56a8d7e30e","Type":"ContainerStarted","Data":"1a559693f9b56156d58239083a81a6678891c8f870945b834f9ae01823b7e554"}
Jan 31 07:34:43 crc kubenswrapper[4908]: I0131 07:34:43.766032 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ph7z5" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="registry-server" containerID="cri-o://7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d" gracePeriod=2
Jan 31 07:34:43 crc kubenswrapper[4908]: I0131 07:34:43.787909 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-psr6s" podStartSLOduration=2.215060752 podStartE2EDuration="4.787890965s" podCreationTimestamp="2026-01-31 07:34:39 +0000 UTC" firstStartedPulling="2026-01-31 07:34:40.402769523 +0000 UTC m=+787.018714177" lastFinishedPulling="2026-01-31 07:34:42.975599736 +0000 UTC m=+789.591544390" observedRunningTime="2026-01-31 07:34:43.785084773 +0000 UTC m=+790.401029457" watchObservedRunningTime="2026-01-31 07:34:43.787890965 +0000 UTC m=+790.403835629"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.146616 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.171419 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content\") pod \"ee032981-4065-4eb6-b9a4-fafcd8003697\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") "
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.171466 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6l7\" (UniqueName: \"kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7\") pod \"ee032981-4065-4eb6-b9a4-fafcd8003697\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") "
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.171547 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities\") pod \"ee032981-4065-4eb6-b9a4-fafcd8003697\" (UID: \"ee032981-4065-4eb6-b9a4-fafcd8003697\") "
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.172459 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities" (OuterVolumeSpecName: "utilities") pod "ee032981-4065-4eb6-b9a4-fafcd8003697" (UID: "ee032981-4065-4eb6-b9a4-fafcd8003697"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.179180 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7" (OuterVolumeSpecName: "kube-api-access-bt6l7") pod "ee032981-4065-4eb6-b9a4-fafcd8003697" (UID: "ee032981-4065-4eb6-b9a4-fafcd8003697"). InnerVolumeSpecName "kube-api-access-bt6l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.272388 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.272422 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6l7\" (UniqueName: \"kubernetes.io/projected/ee032981-4065-4eb6-b9a4-fafcd8003697-kube-api-access-bt6l7\") on node \"crc\" DevicePath \"\""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.297760 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee032981-4065-4eb6-b9a4-fafcd8003697" (UID: "ee032981-4065-4eb6-b9a4-fafcd8003697"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.374045 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee032981-4065-4eb6-b9a4-fafcd8003697-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.773952 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerID="7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d" exitCode=0
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.774014 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerDied","Data":"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"}
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.774062 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ph7z5" event={"ID":"ee032981-4065-4eb6-b9a4-fafcd8003697","Type":"ContainerDied","Data":"8917eb7b3f31a86b886b2de6b09a3a6892460f4f1e48fc5e64f85e8202d5d077"}
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.774090 4908 scope.go:117] "RemoveContainer" containerID="7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.775075 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ph7z5"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.789646 4908 scope.go:117] "RemoveContainer" containerID="8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.800760 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"]
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.804602 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ph7z5"]
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.815175 4908 scope.go:117] "RemoveContainer" containerID="444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.832174 4908 scope.go:117] "RemoveContainer" containerID="7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"
Jan 31 07:34:44 crc kubenswrapper[4908]: E0131 07:34:44.832617 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d\": container with ID starting with 7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d not found: ID does not exist" containerID="7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.832650 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d"} err="failed to get container status \"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d\": rpc error: code = NotFound desc = could not find container \"7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d\": container with ID starting with 7f6c5b28881ac13bcbc7a71075b1e8b7703d2558f7fb5299882704d8cc1a785d not found: ID does not exist"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.832677 4908 scope.go:117] "RemoveContainer" containerID="8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"
Jan 31 07:34:44 crc kubenswrapper[4908]: E0131 07:34:44.833050 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d\": container with ID starting with 8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d not found: ID does not exist" containerID="8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.833081 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d"} err="failed to get container status \"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d\": rpc error: code = NotFound desc = could not find container \"8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d\": container with ID starting with 8bbce9292ce23f266022f0b01675fc6fe6c023b640233c836cd019d20e1b555d not found: ID does not exist"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.833099 4908 scope.go:117] "RemoveContainer" containerID="444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5"
Jan 31 07:34:44 crc kubenswrapper[4908]: E0131 07:34:44.833322 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5\": container with ID starting with 444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5 not found: ID does not exist" containerID="444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5"
Jan 31 07:34:44 crc kubenswrapper[4908]: I0131 07:34:44.833348 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5"} err="failed to get container status \"444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5\": rpc error: code = NotFound desc = could not find container \"444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5\": container with ID starting with 444bb21f8619e60bf9528b5c387330655bd73e91ca429dc0d70c3c027a90fcd5 not found: ID does not exist"
Jan 31 07:34:45 crc kubenswrapper[4908]: I0131 07:34:45.947119 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" path="/var/lib/kubelet/pods/ee032981-4065-4eb6-b9a4-fafcd8003697/volumes"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.883312 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"]
Jan 31 07:34:48 crc kubenswrapper[4908]: E0131 07:34:48.883833 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="registry-server"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.883851 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="registry-server"
Jan 31 07:34:48 crc kubenswrapper[4908]: E0131 07:34:48.883875 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="extract-content"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.883883 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="extract-content"
Jan 31 07:34:48 crc kubenswrapper[4908]: E0131 07:34:48.883894 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="extract-utilities"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.883905 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="extract-utilities"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.884037 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee032981-4065-4eb6-b9a4-fafcd8003697" containerName="registry-server"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.884736 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.887168 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-4zcrg"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.895695 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"]
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.909496 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-r9rqh"]
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.910323 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.915867 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"]
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.917004 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.918777 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925500 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-ovs-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925544 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvv4\" (UniqueName: \"kubernetes.io/projected/6513c39c-085e-4d01-bf22-be7f55191bd5-kube-api-access-dwvv4\") pod \"nmstate-metrics-54757c584b-nqkw8\" (UID: \"6513c39c-085e-4d01-bf22-be7f55191bd5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925582 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rppw\" (UniqueName: \"kubernetes.io/projected/347aecb4-8ba2-4837-af2b-11582ba4de6f-kube-api-access-6rppw\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925603 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-dbus-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925629 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-nmstate-lock\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925646 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.925661 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67xq\" (UniqueName: \"kubernetes.io/projected/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-kube-api-access-p67xq\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:48 crc kubenswrapper[4908]: I0131 07:34:48.936726 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"]
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.020328 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"]
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027422 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rppw\" (UniqueName: \"kubernetes.io/projected/347aecb4-8ba2-4837-af2b-11582ba4de6f-kube-api-access-6rppw\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027489 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-dbus-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027541 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-nmstate-lock\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027567 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027596 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67xq\" (UniqueName: \"kubernetes.io/projected/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-kube-api-access-p67xq\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027668 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-ovs-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.027694 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvv4\" (UniqueName: \"kubernetes.io/projected/6513c39c-085e-4d01-bf22-be7f55191bd5-kube-api-access-dwvv4\") pod \"nmstate-metrics-54757c584b-nqkw8\" (UID: \"6513c39c-085e-4d01-bf22-be7f55191bd5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.028264 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-nmstate-lock\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.028603 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-ovs-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.028605 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-dbus-socket\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: E0131 07:34:49.028782 4908 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 31 07:34:49 crc kubenswrapper[4908]: E0131 07:34:49.028950 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair podName:347aecb4-8ba2-4837-af2b-11582ba4de6f nodeName:}" failed. No retries permitted until 2026-01-31 07:34:49.528929849 +0000 UTC m=+796.144874573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hrzpt" (UID: "347aecb4-8ba2-4837-af2b-11582ba4de6f") : secret "openshift-nmstate-webhook" not found
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.029628 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"]
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.029778 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.031206 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.031396 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.033537 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-b7qgw"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.049204 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67xq\" (UniqueName: \"kubernetes.io/projected/d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5-kube-api-access-p67xq\") pod \"nmstate-handler-r9rqh\" (UID: \"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5\") " pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.049872 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvv4\" (UniqueName: \"kubernetes.io/projected/6513c39c-085e-4d01-bf22-be7f55191bd5-kube-api-access-dwvv4\") pod \"nmstate-metrics-54757c584b-nqkw8\" (UID: \"6513c39c-085e-4d01-bf22-be7f55191bd5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.051787 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rppw\" (UniqueName: \"kubernetes.io/projected/347aecb4-8ba2-4837-af2b-11582ba4de6f-kube-api-access-6rppw\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.129194 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.129328 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.129365 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhbn4\" (UniqueName: \"kubernetes.io/projected/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-kube-api-access-vhbn4\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.189412 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-854d79fbc5-8nhzx"]
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.190280 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.206522 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-854d79fbc5-8nhzx"]
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.207401 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.229947 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230035 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230069 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-oauth-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230094 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-trusted-ca-bundle\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230116 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230136 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-service-ca\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230157 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhbn4\" (UniqueName: \"kubernetes.io/projected/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-kube-api-access-vhbn4\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230180 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbx74\" (UniqueName: \"kubernetes.io/projected/a01a2586-5dc2-4df2-bb93-15f26367c079-kube-api-access-tbx74\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.230199 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-console-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.240642 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-oauth-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.249203 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r9rqh"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.250424 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.253625 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.273390 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhbn4\" (UniqueName: \"kubernetes.io/projected/3a0b9ab4-75be-4ba4-bd4a-f87df5b21366-kube-api-access-vhbn4\") pod \"nmstate-console-plugin-7754f76f8b-2jnxn\" (UID: \"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.347455 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.347707 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-oauth-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.347898 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-trusted-ca-bundle\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.347923 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-service-ca\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.347966 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbx74\" (UniqueName: \"kubernetes.io/projected/a01a2586-5dc2-4df2-bb93-15f26367c079-kube-api-access-tbx74\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.348049 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-console-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.348072 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-oauth-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.348112 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.349803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-console-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.350319 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-service-ca\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.351084 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-trusted-ca-bundle\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.352791 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.352904 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a01a2586-5dc2-4df2-bb93-15f26367c079-oauth-serving-cert\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.354535 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a01a2586-5dc2-4df2-bb93-15f26367c079-console-oauth-config\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.376924 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbx74\" (UniqueName: \"kubernetes.io/projected/a01a2586-5dc2-4df2-bb93-15f26367c079-kube-api-access-tbx74\") pod \"console-854d79fbc5-8nhzx\" (UID: \"a01a2586-5dc2-4df2-bb93-15f26367c079\") " pod="openshift-console/console-854d79fbc5-8nhzx"
Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.504787 4908 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-854d79fbc5-8nhzx" Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.552761 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.556333 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/347aecb4-8ba2-4837-af2b-11582ba4de6f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hrzpt\" (UID: \"347aecb4-8ba2-4837-af2b-11582ba4de6f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.629697 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqkw8"] Jan 31 07:34:49 crc kubenswrapper[4908]: W0131 07:34:49.638958 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6513c39c_085e_4d01_bf22_be7f55191bd5.slice/crio-ba85566e8b904116cc35213152d46fd6a7fc10652636676ef4e1d30282b4b10c WatchSource:0}: Error finding container ba85566e8b904116cc35213152d46fd6a7fc10652636676ef4e1d30282b4b10c: Status 404 returned error can't find the container with id ba85566e8b904116cc35213152d46fd6a7fc10652636676ef4e1d30282b4b10c Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.679512 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-854d79fbc5-8nhzx"] Jan 31 07:34:49 crc kubenswrapper[4908]: W0131 07:34:49.684543 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda01a2586_5dc2_4df2_bb93_15f26367c079.slice/crio-117ea7c7c440942baaffcf5b17a52fe0b412db872fc48475b58d6222ef3a558d WatchSource:0}: Error finding container 117ea7c7c440942baaffcf5b17a52fe0b412db872fc48475b58d6222ef3a558d: Status 404 returned error can't find the container with id 117ea7c7c440942baaffcf5b17a52fe0b412db872fc48475b58d6222ef3a558d Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.728253 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn"] Jan 31 07:34:49 crc kubenswrapper[4908]: W0131 07:34:49.739589 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a0b9ab4_75be_4ba4_bd4a_f87df5b21366.slice/crio-5c224e69e59497487a60e467f36b75c052fe3035720370abbf442394dac9ef15 WatchSource:0}: Error finding container 5c224e69e59497487a60e467f36b75c052fe3035720370abbf442394dac9ef15: Status 404 returned error can't find the container with id 5c224e69e59497487a60e467f36b75c052fe3035720370abbf442394dac9ef15 Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.803305 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r9rqh" event={"ID":"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5","Type":"ContainerStarted","Data":"56d826749de136abb291b3cbe68f358efc8e1561beede29832ce003de452f9e3"} Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.804609 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-854d79fbc5-8nhzx" event={"ID":"a01a2586-5dc2-4df2-bb93-15f26367c079","Type":"ContainerStarted","Data":"117ea7c7c440942baaffcf5b17a52fe0b412db872fc48475b58d6222ef3a558d"} Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.805405 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8" 
event={"ID":"6513c39c-085e-4d01-bf22-be7f55191bd5","Type":"ContainerStarted","Data":"ba85566e8b904116cc35213152d46fd6a7fc10652636676ef4e1d30282b4b10c"} Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.806134 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn" event={"ID":"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366","Type":"ContainerStarted","Data":"5c224e69e59497487a60e467f36b75c052fe3035720370abbf442394dac9ef15"} Jan 31 07:34:49 crc kubenswrapper[4908]: I0131 07:34:49.845817 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" Jan 31 07:34:50 crc kubenswrapper[4908]: I0131 07:34:50.012253 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt"] Jan 31 07:34:50 crc kubenswrapper[4908]: W0131 07:34:50.014475 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod347aecb4_8ba2_4837_af2b_11582ba4de6f.slice/crio-907e159fe0b8ac515095b7a18b0e4eefac7f1b85547b62b5c819cfa4ad7bb0f8 WatchSource:0}: Error finding container 907e159fe0b8ac515095b7a18b0e4eefac7f1b85547b62b5c819cfa4ad7bb0f8: Status 404 returned error can't find the container with id 907e159fe0b8ac515095b7a18b0e4eefac7f1b85547b62b5c819cfa4ad7bb0f8 Jan 31 07:34:50 crc kubenswrapper[4908]: I0131 07:34:50.813487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-854d79fbc5-8nhzx" event={"ID":"a01a2586-5dc2-4df2-bb93-15f26367c079","Type":"ContainerStarted","Data":"5f17ad49379b95398c5b09fb5c0120356610f907d084285e01d7027b28e580d3"} Jan 31 07:34:50 crc kubenswrapper[4908]: I0131 07:34:50.815127 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" 
event={"ID":"347aecb4-8ba2-4837-af2b-11582ba4de6f","Type":"ContainerStarted","Data":"907e159fe0b8ac515095b7a18b0e4eefac7f1b85547b62b5c819cfa4ad7bb0f8"} Jan 31 07:34:50 crc kubenswrapper[4908]: I0131 07:34:50.837103 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-854d79fbc5-8nhzx" podStartSLOduration=1.837083007 podStartE2EDuration="1.837083007s" podCreationTimestamp="2026-01-31 07:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:34:50.83059414 +0000 UTC m=+797.446538834" watchObservedRunningTime="2026-01-31 07:34:50.837083007 +0000 UTC m=+797.453027661" Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.835773 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r9rqh" event={"ID":"d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5","Type":"ContainerStarted","Data":"9546a86399f024469790fba7d4b6d39a9764553b011cbb110d24e6d3e5187e11"} Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.836358 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-r9rqh" Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.838533 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" event={"ID":"347aecb4-8ba2-4837-af2b-11582ba4de6f","Type":"ContainerStarted","Data":"72242579fb9861c824b425f723936ca71d4bbd03113a43bd2f38f03818b15439"} Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.839204 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.840786 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8" 
event={"ID":"6513c39c-085e-4d01-bf22-be7f55191bd5","Type":"ContainerStarted","Data":"c7d87caee18539f32a20dde74f36ace0625ec4579eebf0b571b3717d5d4a5f32"} Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.841931 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn" event={"ID":"3a0b9ab4-75be-4ba4-bd4a-f87df5b21366","Type":"ContainerStarted","Data":"6e22757a666a794283d5a44a156196917f38703d5fab753d3e6dbce939a72939"} Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.852657 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-r9rqh" podStartSLOduration=2.390744511 podStartE2EDuration="5.852639302s" podCreationTimestamp="2026-01-31 07:34:48 +0000 UTC" firstStartedPulling="2026-01-31 07:34:49.28220699 +0000 UTC m=+795.898151644" lastFinishedPulling="2026-01-31 07:34:52.744101781 +0000 UTC m=+799.360046435" observedRunningTime="2026-01-31 07:34:53.85216013 +0000 UTC m=+800.468104784" watchObservedRunningTime="2026-01-31 07:34:53.852639302 +0000 UTC m=+800.468583956" Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.891908 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" podStartSLOduration=3.16293833 podStartE2EDuration="5.89189058s" podCreationTimestamp="2026-01-31 07:34:48 +0000 UTC" firstStartedPulling="2026-01-31 07:34:50.016358692 +0000 UTC m=+796.632303346" lastFinishedPulling="2026-01-31 07:34:52.745310942 +0000 UTC m=+799.361255596" observedRunningTime="2026-01-31 07:34:53.891727606 +0000 UTC m=+800.507672260" watchObservedRunningTime="2026-01-31 07:34:53.89189058 +0000 UTC m=+800.507835234" Jan 31 07:34:53 crc kubenswrapper[4908]: I0131 07:34:53.895477 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2jnxn" podStartSLOduration=1.89411059 
podStartE2EDuration="4.895460631s" podCreationTimestamp="2026-01-31 07:34:49 +0000 UTC" firstStartedPulling="2026-01-31 07:34:49.742781391 +0000 UTC m=+796.358726035" lastFinishedPulling="2026-01-31 07:34:52.744131422 +0000 UTC m=+799.360076076" observedRunningTime="2026-01-31 07:34:53.866858227 +0000 UTC m=+800.482802901" watchObservedRunningTime="2026-01-31 07:34:53.895460631 +0000 UTC m=+800.511405285" Jan 31 07:34:56 crc kubenswrapper[4908]: I0131 07:34:56.859452 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8" event={"ID":"6513c39c-085e-4d01-bf22-be7f55191bd5","Type":"ContainerStarted","Data":"9c4fcaa04bb92c780e58df0076bad9534c50fd838ecd4ec601b6f7a3128de7dd"} Jan 31 07:34:56 crc kubenswrapper[4908]: I0131 07:34:56.882890 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqkw8" podStartSLOduration=2.692865835 podStartE2EDuration="8.882870095s" podCreationTimestamp="2026-01-31 07:34:48 +0000 UTC" firstStartedPulling="2026-01-31 07:34:49.642097606 +0000 UTC m=+796.258042250" lastFinishedPulling="2026-01-31 07:34:55.832101856 +0000 UTC m=+802.448046510" observedRunningTime="2026-01-31 07:34:56.878319229 +0000 UTC m=+803.494263883" watchObservedRunningTime="2026-01-31 07:34:56.882870095 +0000 UTC m=+803.498814769" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.276110 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-r9rqh" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.505285 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-854d79fbc5-8nhzx" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.505371 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-854d79fbc5-8nhzx" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.511362 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-854d79fbc5-8nhzx" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.884042 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-854d79fbc5-8nhzx" Jan 31 07:34:59 crc kubenswrapper[4908]: I0131 07:34:59.931366 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"] Jan 31 07:35:09 crc kubenswrapper[4908]: I0131 07:35:09.851923 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hrzpt" Jan 31 07:35:10 crc kubenswrapper[4908]: I0131 07:35:10.431107 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:35:10 crc kubenswrapper[4908]: I0131 07:35:10.431186 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.760003 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh"] Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.762585 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.765057 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.783099 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh"] Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.938880 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.939741 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fm8\" (UniqueName: \"kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:24 crc kubenswrapper[4908]: I0131 07:35:24.939837 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:24 crc kubenswrapper[4908]: 
I0131 07:35:24.989494 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fjlrr" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console" containerID="cri-o://75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1" gracePeriod=15 Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.040776 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.040933 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.041011 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92fm8\" (UniqueName: \"kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.041668 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.042128 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.068524 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92fm8\" (UniqueName: \"kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.081093 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.428026 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fjlrr_097d2f96-ce86-4d47-a55c-c717d272a8ef/console/0.log" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.428127 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fjlrr" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.542584 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh"] Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.545828 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.546061 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.546122 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.546232 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.546263 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh776\" (UniqueName: \"kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776\") pod 
\"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.546938 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.547009 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config" (OuterVolumeSpecName: "console-config") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.547056 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.547047 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca" (OuterVolumeSpecName: "service-ca") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.547101 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert\") pod \"097d2f96-ce86-4d47-a55c-c717d272a8ef\" (UID: \"097d2f96-ce86-4d47-a55c-c717d272a8ef\") " Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.547399 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.548340 4908 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.548371 4908 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.548382 4908 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.548391 4908 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/097d2f96-ce86-4d47-a55c-c717d272a8ef-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 
07:35:25.550295 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776" (OuterVolumeSpecName: "kube-api-access-mh776") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "kube-api-access-mh776". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.550908 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.551789 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "097d2f96-ce86-4d47-a55c-c717d272a8ef" (UID: "097d2f96-ce86-4d47-a55c-c717d272a8ef"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.649248 4908 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.649273 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh776\" (UniqueName: \"kubernetes.io/projected/097d2f96-ce86-4d47-a55c-c717d272a8ef-kube-api-access-mh776\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:25 crc kubenswrapper[4908]: I0131 07:35:25.649283 4908 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/097d2f96-ce86-4d47-a55c-c717d272a8ef-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047035 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fjlrr_097d2f96-ce86-4d47-a55c-c717d272a8ef/console/0.log"
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047083 4908 generic.go:334] "Generic (PLEG): container finished" podID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerID="75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1" exitCode=2
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047134 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fjlrr" event={"ID":"097d2f96-ce86-4d47-a55c-c717d272a8ef","Type":"ContainerDied","Data":"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"}
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047161 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fjlrr" event={"ID":"097d2f96-ce86-4d47-a55c-c717d272a8ef","Type":"ContainerDied","Data":"0d15d2d1ef0960f36e9c4003db41f1f0806d8004d5771c5b04c28e5a01c6459c"}
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047169 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fjlrr"
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.047181 4908 scope.go:117] "RemoveContainer" containerID="75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.049995 4908 generic.go:334] "Generic (PLEG): container finished" podID="625a99c3-e2e7-493b-8a5a-071981756003" containerID="360319e75497f3abcc7c000f6161a9775ff4aab905ada81369cd4992c688bd7d" exitCode=0
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.050038 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" event={"ID":"625a99c3-e2e7-493b-8a5a-071981756003","Type":"ContainerDied","Data":"360319e75497f3abcc7c000f6161a9775ff4aab905ada81369cd4992c688bd7d"}
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.050069 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" event={"ID":"625a99c3-e2e7-493b-8a5a-071981756003","Type":"ContainerStarted","Data":"eef7c024bb17e098674965c69254832dda92b66dbf82806af50200dfb5c9494b"}
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.068211 4908 scope.go:117] "RemoveContainer" containerID="75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"
Jan 31 07:35:26 crc kubenswrapper[4908]: E0131 07:35:26.068894 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1\": container with ID starting with 75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1 not found: ID does not exist" containerID="75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.068949 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1"} err="failed to get container status \"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1\": rpc error: code = NotFound desc = could not find container \"75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1\": container with ID starting with 75d170aeb9ea07f95483c3f806eed8d83417cfa72b4a7226dc501100c51eacd1 not found: ID does not exist"
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.091428 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"]
Jan 31 07:35:26 crc kubenswrapper[4908]: I0131 07:35:26.103421 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fjlrr"]
Jan 31 07:35:27 crc kubenswrapper[4908]: I0131 07:35:27.947153 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" path="/var/lib/kubelet/pods/097d2f96-ce86-4d47-a55c-c717d272a8ef/volumes"
Jan 31 07:35:29 crc kubenswrapper[4908]: I0131 07:35:29.072339 4908 generic.go:334] "Generic (PLEG): container finished" podID="625a99c3-e2e7-493b-8a5a-071981756003" containerID="00169421b92402843c72a03545dffea61bef60b9e469f822cdfbc6667d9e2d68" exitCode=0
Jan 31 07:35:29 crc kubenswrapper[4908]: I0131 07:35:29.072445 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" event={"ID":"625a99c3-e2e7-493b-8a5a-071981756003","Type":"ContainerDied","Data":"00169421b92402843c72a03545dffea61bef60b9e469f822cdfbc6667d9e2d68"}
Jan 31 07:35:30 crc kubenswrapper[4908]: I0131 07:35:30.081577 4908 generic.go:334] "Generic (PLEG): container finished" podID="625a99c3-e2e7-493b-8a5a-071981756003" containerID="b8bd471569b1b6f4eb71a4c46da675556ebcef6e839d09a90e72545414d6e581" exitCode=0
Jan 31 07:35:30 crc kubenswrapper[4908]: I0131 07:35:30.081615 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" event={"ID":"625a99c3-e2e7-493b-8a5a-071981756003","Type":"ContainerDied","Data":"b8bd471569b1b6f4eb71a4c46da675556ebcef6e839d09a90e72545414d6e581"}
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.337869 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh"
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.537086 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle\") pod \"625a99c3-e2e7-493b-8a5a-071981756003\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") "
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.537453 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util\") pod \"625a99c3-e2e7-493b-8a5a-071981756003\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") "
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.537562 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92fm8\" (UniqueName: \"kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8\") pod \"625a99c3-e2e7-493b-8a5a-071981756003\" (UID: \"625a99c3-e2e7-493b-8a5a-071981756003\") "
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.538535 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle" (OuterVolumeSpecName: "bundle") pod "625a99c3-e2e7-493b-8a5a-071981756003" (UID: "625a99c3-e2e7-493b-8a5a-071981756003"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.542647 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8" (OuterVolumeSpecName: "kube-api-access-92fm8") pod "625a99c3-e2e7-493b-8a5a-071981756003" (UID: "625a99c3-e2e7-493b-8a5a-071981756003"). InnerVolumeSpecName "kube-api-access-92fm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.549369 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util" (OuterVolumeSpecName: "util") pod "625a99c3-e2e7-493b-8a5a-071981756003" (UID: "625a99c3-e2e7-493b-8a5a-071981756003"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.639171 4908 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-util\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.639231 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92fm8\" (UniqueName: \"kubernetes.io/projected/625a99c3-e2e7-493b-8a5a-071981756003-kube-api-access-92fm8\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:31 crc kubenswrapper[4908]: I0131 07:35:31.639253 4908 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/625a99c3-e2e7-493b-8a5a-071981756003-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:35:32 crc kubenswrapper[4908]: I0131 07:35:32.097909 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh" event={"ID":"625a99c3-e2e7-493b-8a5a-071981756003","Type":"ContainerDied","Data":"eef7c024bb17e098674965c69254832dda92b66dbf82806af50200dfb5c9494b"}
Jan 31 07:35:32 crc kubenswrapper[4908]: I0131 07:35:32.097946 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef7c024bb17e098674965c69254832dda92b66dbf82806af50200dfb5c9494b"
Jan 31 07:35:32 crc kubenswrapper[4908]: I0131 07:35:32.098023 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.631789 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"]
Jan 31 07:35:39 crc kubenswrapper[4908]: E0131 07:35:39.632615 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="extract"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632632 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="extract"
Jan 31 07:35:39 crc kubenswrapper[4908]: E0131 07:35:39.632646 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="pull"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632654 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="pull"
Jan 31 07:35:39 crc kubenswrapper[4908]: E0131 07:35:39.632664 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632673 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console"
Jan 31 07:35:39 crc kubenswrapper[4908]: E0131 07:35:39.632682 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="util"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632690 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="util"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632798 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="625a99c3-e2e7-493b-8a5a-071981756003" containerName="extract"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.632815 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="097d2f96-ce86-4d47-a55c-c717d272a8ef" containerName="console"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.633298 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.635115 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.635443 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.635634 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.636689 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gcljg"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.638378 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.651367 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"]
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.651602 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjz5\" (UniqueName: \"kubernetes.io/projected/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-kube-api-access-hhjz5\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.651653 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-apiservice-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.651763 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-webhook-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.752705 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-apiservice-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.752756 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjz5\" (UniqueName: \"kubernetes.io/projected/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-kube-api-access-hhjz5\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.752826 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-webhook-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.759345 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-apiservice-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.761807 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-webhook-cert\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.774997 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjz5\" (UniqueName: \"kubernetes.io/projected/776290b3-3d7d-4abb-8718-0e6dadf1bbfa-kube-api-access-hhjz5\") pod \"metallb-operator-controller-manager-655f5d8bc7-frrz6\" (UID: \"776290b3-3d7d-4abb-8718-0e6dadf1bbfa\") " pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.891304 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"]
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.904534 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"]
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.904645 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.908184 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q4j75"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.908320 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.908392 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.948264 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.954616 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcwz\" (UniqueName: \"kubernetes.io/projected/0bf3501e-40f0-4fd9-aa69-e1843e83887e-kube-api-access-tmcwz\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.954660 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-webhook-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:39 crc kubenswrapper[4908]: I0131 07:35:39.954783 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-apiservice-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.055991 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmcwz\" (UniqueName: \"kubernetes.io/projected/0bf3501e-40f0-4fd9-aa69-e1843e83887e-kube-api-access-tmcwz\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.056035 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-webhook-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.056088 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-apiservice-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.060123 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-apiservice-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.061684 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0bf3501e-40f0-4fd9-aa69-e1843e83887e-webhook-cert\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.079180 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmcwz\" (UniqueName: \"kubernetes.io/projected/0bf3501e-40f0-4fd9-aa69-e1843e83887e-kube-api-access-tmcwz\") pod \"metallb-operator-webhook-server-5dc4575dbd-vcbx4\" (UID: \"0bf3501e-40f0-4fd9-aa69-e1843e83887e\") " pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.222492 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.389383 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"]
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.432238 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.432311 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.432378 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.433043 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.433116 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75" gracePeriod=600
Jan 31 07:35:40 crc kubenswrapper[4908]: I0131 07:35:40.557294 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"]
Jan 31 07:35:40 crc kubenswrapper[4908]: W0131 07:35:40.573942 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bf3501e_40f0_4fd9_aa69_e1843e83887e.slice/crio-13961fbf9a00294da40be0b89134a0d4073f18b26bd2841f63e8288189efc506 WatchSource:0}: Error finding container 13961fbf9a00294da40be0b89134a0d4073f18b26bd2841f63e8288189efc506: Status 404 returned error can't find the container with id 13961fbf9a00294da40be0b89134a0d4073f18b26bd2841f63e8288189efc506
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.153152 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4" event={"ID":"0bf3501e-40f0-4fd9-aa69-e1843e83887e","Type":"ContainerStarted","Data":"13961fbf9a00294da40be0b89134a0d4073f18b26bd2841f63e8288189efc506"}
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.154495 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6" event={"ID":"776290b3-3d7d-4abb-8718-0e6dadf1bbfa","Type":"ContainerStarted","Data":"1f955ca43586a07d59b5ef765ff4f67d78c952fb992b239134bb57a460c44edf"}
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.158533 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75" exitCode=0
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.158588 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75"}
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.158616 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19"}
Jan 31 07:35:41 crc kubenswrapper[4908]: I0131 07:35:41.158634 4908 scope.go:117] "RemoveContainer" containerID="3dfbbc1b5ff70365792954805ead0bd41cfb62c5615a5fe9df3e5b65b3920434"
Jan 31 07:35:44 crc kubenswrapper[4908]: I0131 07:35:44.184149 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6" event={"ID":"776290b3-3d7d-4abb-8718-0e6dadf1bbfa","Type":"ContainerStarted","Data":"55c0b8d1a47e3fa5070b59eb1d7c3db8bac3991802f54c8c78c5d2fc803eaa00"}
Jan 31 07:35:44 crc kubenswrapper[4908]: I0131 07:35:44.185528 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:35:44 crc kubenswrapper[4908]: I0131 07:35:44.218394 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6" podStartSLOduration=2.201909077 podStartE2EDuration="5.218375981s" podCreationTimestamp="2026-01-31 07:35:39 +0000 UTC" firstStartedPulling="2026-01-31 07:35:40.409386611 +0000 UTC m=+847.025331265" lastFinishedPulling="2026-01-31 07:35:43.425853515 +0000 UTC m=+850.041798169" observedRunningTime="2026-01-31 07:35:44.204039044 +0000 UTC m=+850.819983698" watchObservedRunningTime="2026-01-31 07:35:44.218375981 +0000 UTC m=+850.834320635"
Jan 31 07:35:49 crc kubenswrapper[4908]: I0131 07:35:49.213962 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4" event={"ID":"0bf3501e-40f0-4fd9-aa69-e1843e83887e","Type":"ContainerStarted","Data":"f2972cbc5b47ae2bbf08b961aa3e854ae5a0cc968cfd60726144b1d118a9a243"}
Jan 31 07:35:49 crc kubenswrapper[4908]: I0131 07:35:49.215164 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:35:49 crc kubenswrapper[4908]: I0131 07:35:49.248306 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4" podStartSLOduration=2.656128718 podStartE2EDuration="10.248288698s" podCreationTimestamp="2026-01-31 07:35:39 +0000 UTC" firstStartedPulling="2026-01-31 07:35:40.577963224 +0000 UTC m=+847.193907888" lastFinishedPulling="2026-01-31 07:35:48.170123214 +0000 UTC m=+854.786067868" observedRunningTime="2026-01-31 07:35:49.242598418 +0000 UTC m=+855.858543072" watchObservedRunningTime="2026-01-31 07:35:49.248288698 +0000 UTC m=+855.864233352"
Jan 31 07:36:00 crc kubenswrapper[4908]: I0131 07:36:00.266565 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5dc4575dbd-vcbx4"
Jan 31 07:36:19 crc kubenswrapper[4908]: I0131 07:36:19.955098 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-655f5d8bc7-frrz6"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.731092 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rp98d"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.733786 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.736369 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.736757 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.737004 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hsrcj"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.738136 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.739110 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.744357 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.753184 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810346 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-startup\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810427 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6zn\" (UniqueName: \"kubernetes.io/projected/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-kube-api-access-bz6zn\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810460 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-reloader\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810484 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-sockets\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810505 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810520 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810543 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjgh\" (UniqueName: \"kubernetes.io/projected/6c6977f3-afad-417f-b8e0-8283a6456b1b-kube-api-access-jjjgh\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810559 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-conf\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.810579 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.819995 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wfcsd"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.820817 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wfcsd"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.822888 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v5lhk"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.823090 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.823289 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.823421 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.828105 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-mkc4z"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.829370 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mkc4z"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.831536 4908 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.840357 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mkc4z"]
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.911657 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.911908 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-startup\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912164 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6zn\" (UniqueName: \"kubernetes.io/projected/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-kube-api-access-bz6zn\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912206 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd"
Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912237 4908
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-reloader\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912275 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-sockets\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912302 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912321 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912344 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/92a4638a-389b-465b-8c59-c8689328205b-metallb-excludel2\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912368 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912383 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912395 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjgh\" (UniqueName: \"kubernetes.io/projected/6c6977f3-afad-417f-b8e0-8283a6456b1b-kube-api-access-jjjgh\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912463 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-conf\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912749 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-conf\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.912806 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-startup\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: E0131 
07:36:20.912921 4908 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 31 07:36:20 crc kubenswrapper[4908]: E0131 07:36:20.913003 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs podName:6c6977f3-afad-417f-b8e0-8283a6456b1b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:21.412965275 +0000 UTC m=+888.028909999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs") pod "frr-k8s-rp98d" (UID: "6c6977f3-afad-417f-b8e0-8283a6456b1b") : secret "frr-k8s-certs-secret" not found Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.913088 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-frr-sockets\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.913188 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6977f3-afad-417f-b8e0-8283a6456b1b-reloader\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.921964 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.942686 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bz6zn\" (UniqueName: \"kubernetes.io/projected/d12d1a65-d2bd-47b1-a662-d97bbfa8aa51-kube-api-access-bz6zn\") pod \"frr-k8s-webhook-server-7df86c4f6c-2xcng\" (UID: \"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:20 crc kubenswrapper[4908]: I0131 07:36:20.944532 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjgh\" (UniqueName: \"kubernetes.io/projected/6c6977f3-afad-417f-b8e0-8283a6456b1b-kube-api-access-jjjgh\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.013579 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-cert\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.013669 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnrs\" (UniqueName: \"kubernetes.io/projected/6e02f235-b699-4c99-a66e-7bde91d7b5be-kube-api-access-6jnrs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.013742 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-metrics-certs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.013782 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.013914 4908 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.013910 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/92a4638a-389b-465b-8c59-c8689328205b-metallb-excludel2\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.013965 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist podName:92a4638a-389b-465b-8c59-c8689328205b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:21.513944036 +0000 UTC m=+888.129888680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist") pod "speaker-wfcsd" (UID: "92a4638a-389b-465b-8c59-c8689328205b") : secret "metallb-memberlist" not found Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.014012 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.014044 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht9mn\" (UniqueName: \"kubernetes.io/projected/92a4638a-389b-465b-8c59-c8689328205b-kube-api-access-ht9mn\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.014136 4908 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.014174 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs podName:92a4638a-389b-465b-8c59-c8689328205b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:21.514166472 +0000 UTC m=+888.130111126 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs") pod "speaker-wfcsd" (UID: "92a4638a-389b-465b-8c59-c8689328205b") : secret "speaker-certs-secret" not found Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.014576 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/92a4638a-389b-465b-8c59-c8689328205b-metallb-excludel2\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.066859 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.115280 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht9mn\" (UniqueName: \"kubernetes.io/projected/92a4638a-389b-465b-8c59-c8689328205b-kube-api-access-ht9mn\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.115669 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-cert\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.115707 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnrs\" (UniqueName: \"kubernetes.io/projected/6e02f235-b699-4c99-a66e-7bde91d7b5be-kube-api-access-6jnrs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc 
kubenswrapper[4908]: I0131 07:36:21.115736 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-metrics-certs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.124991 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-cert\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.126097 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e02f235-b699-4c99-a66e-7bde91d7b5be-metrics-certs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.136132 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnrs\" (UniqueName: \"kubernetes.io/projected/6e02f235-b699-4c99-a66e-7bde91d7b5be-kube-api-access-6jnrs\") pod \"controller-6968d8fdc4-mkc4z\" (UID: \"6e02f235-b699-4c99-a66e-7bde91d7b5be\") " pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.141456 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht9mn\" (UniqueName: \"kubernetes.io/projected/92a4638a-389b-465b-8c59-c8689328205b-kube-api-access-ht9mn\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.159670 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.418865 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.422417 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6977f3-afad-417f-b8e0-8283a6456b1b-metrics-certs\") pod \"frr-k8s-rp98d\" (UID: \"6c6977f3-afad-417f-b8e0-8283a6456b1b\") " pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.479517 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng"] Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.520599 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.520684 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.520776 4908 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 07:36:21 crc kubenswrapper[4908]: E0131 07:36:21.520820 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist podName:92a4638a-389b-465b-8c59-c8689328205b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:22.520807423 +0000 UTC m=+889.136752077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist") pod "speaker-wfcsd" (UID: "92a4638a-389b-465b-8c59-c8689328205b") : secret "metallb-memberlist" not found Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.524019 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-metrics-certs\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.581341 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mkc4z"] Jan 31 07:36:21 crc kubenswrapper[4908]: W0131 07:36:21.585199 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e02f235_b699_4c99_a66e_7bde91d7b5be.slice/crio-55252c4fd2a79ae25a3de89629d45731eb9f545dfbf04eb4b8811b9e54e5f534 WatchSource:0}: Error finding container 55252c4fd2a79ae25a3de89629d45731eb9f545dfbf04eb4b8811b9e54e5f534: Status 404 returned error can't find the container with id 55252c4fd2a79ae25a3de89629d45731eb9f545dfbf04eb4b8811b9e54e5f534 Jan 31 07:36:21 crc kubenswrapper[4908]: I0131 07:36:21.653916 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.388728 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mkc4z" event={"ID":"6e02f235-b699-4c99-a66e-7bde91d7b5be","Type":"ContainerStarted","Data":"940b0499ea80225c20d31c7889d6c493629fd7587ea45a25995f2b46ce702844"} Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.389083 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mkc4z" event={"ID":"6e02f235-b699-4c99-a66e-7bde91d7b5be","Type":"ContainerStarted","Data":"a0ad8282fd246a7bf22fb848ff9334bc5d4b3ad1f469c33d7d9fdae65208d65b"} Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.389100 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mkc4z" event={"ID":"6e02f235-b699-4c99-a66e-7bde91d7b5be","Type":"ContainerStarted","Data":"55252c4fd2a79ae25a3de89629d45731eb9f545dfbf04eb4b8811b9e54e5f534"} Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.389116 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-mkc4z" Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.389811 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"0e229b8694c19551e9b679ebff84c9e5f62c3b75bfddb4f0d0fd6f0d5f1ee852"} Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.391165 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" event={"ID":"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51","Type":"ContainerStarted","Data":"9167e93a2621cf612dc63512443fd1d7ad21548bedc94e642850d3a45a715434"} Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.407915 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-6968d8fdc4-mkc4z" podStartSLOduration=2.407898951 podStartE2EDuration="2.407898951s" podCreationTimestamp="2026-01-31 07:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:36:22.405182379 +0000 UTC m=+889.021127033" watchObservedRunningTime="2026-01-31 07:36:22.407898951 +0000 UTC m=+889.023843605" Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.532885 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.538523 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/92a4638a-389b-465b-8c59-c8689328205b-memberlist\") pod \"speaker-wfcsd\" (UID: \"92a4638a-389b-465b-8c59-c8689328205b\") " pod="metallb-system/speaker-wfcsd" Jan 31 07:36:22 crc kubenswrapper[4908]: I0131 07:36:22.640109 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wfcsd" Jan 31 07:36:22 crc kubenswrapper[4908]: W0131 07:36:22.661550 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a4638a_389b_465b_8c59_c8689328205b.slice/crio-caafacc32cc7039e044c7ca048c9180f453766d0e17f8b5821f5ba2f376a1e0b WatchSource:0}: Error finding container caafacc32cc7039e044c7ca048c9180f453766d0e17f8b5821f5ba2f376a1e0b: Status 404 returned error can't find the container with id caafacc32cc7039e044c7ca048c9180f453766d0e17f8b5821f5ba2f376a1e0b Jan 31 07:36:23 crc kubenswrapper[4908]: I0131 07:36:23.400802 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wfcsd" event={"ID":"92a4638a-389b-465b-8c59-c8689328205b","Type":"ContainerStarted","Data":"a63294b1d3e6737a6e6adae261605cde754845337ab7f3a5413e542218c6834d"} Jan 31 07:36:23 crc kubenswrapper[4908]: I0131 07:36:23.400844 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wfcsd" event={"ID":"92a4638a-389b-465b-8c59-c8689328205b","Type":"ContainerStarted","Data":"62dedd483b46b819a6be5c40d27c3ee6df74bbbfcf657a3417ca873b5557c4d9"} Jan 31 07:36:23 crc kubenswrapper[4908]: I0131 07:36:23.400856 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wfcsd" event={"ID":"92a4638a-389b-465b-8c59-c8689328205b","Type":"ContainerStarted","Data":"caafacc32cc7039e044c7ca048c9180f453766d0e17f8b5821f5ba2f376a1e0b"} Jan 31 07:36:23 crc kubenswrapper[4908]: I0131 07:36:23.401413 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wfcsd" Jan 31 07:36:23 crc kubenswrapper[4908]: I0131 07:36:23.427422 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wfcsd" podStartSLOduration=3.427401419 podStartE2EDuration="3.427401419s" podCreationTimestamp="2026-01-31 07:36:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:36:23.42554714 +0000 UTC m=+890.041491804" watchObservedRunningTime="2026-01-31 07:36:23.427401419 +0000 UTC m=+890.043346083" Jan 31 07:36:29 crc kubenswrapper[4908]: I0131 07:36:29.456764 4908 generic.go:334] "Generic (PLEG): container finished" podID="6c6977f3-afad-417f-b8e0-8283a6456b1b" containerID="614acadcb9af92c0133bb6ae73b35a6d9f40d9b15c0bdf4dd05f6aed75895c35" exitCode=0 Jan 31 07:36:29 crc kubenswrapper[4908]: I0131 07:36:29.456858 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerDied","Data":"614acadcb9af92c0133bb6ae73b35a6d9f40d9b15c0bdf4dd05f6aed75895c35"} Jan 31 07:36:29 crc kubenswrapper[4908]: I0131 07:36:29.460717 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" event={"ID":"d12d1a65-d2bd-47b1-a662-d97bbfa8aa51","Type":"ContainerStarted","Data":"b3d7b96a84b0d88a01cffea3f2d4d85a2801fb3c28c23e80885f6a1768a326da"} Jan 31 07:36:29 crc kubenswrapper[4908]: I0131 07:36:29.460893 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:29 crc kubenswrapper[4908]: I0131 07:36:29.521305 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" podStartSLOduration=2.216456277 podStartE2EDuration="9.521284944s" podCreationTimestamp="2026-01-31 07:36:20 +0000 UTC" firstStartedPulling="2026-01-31 07:36:21.49145091 +0000 UTC m=+888.107395564" lastFinishedPulling="2026-01-31 07:36:28.796279577 +0000 UTC m=+895.412224231" observedRunningTime="2026-01-31 07:36:29.516399675 +0000 UTC m=+896.132344329" watchObservedRunningTime="2026-01-31 07:36:29.521284944 +0000 UTC m=+896.137229598" Jan 31 
07:36:30 crc kubenswrapper[4908]: I0131 07:36:30.476571 4908 generic.go:334] "Generic (PLEG): container finished" podID="6c6977f3-afad-417f-b8e0-8283a6456b1b" containerID="5e50620a1ef439c4b8d3265eb3879c7afe2793682101e3cc2375a2553384f613" exitCode=0
Jan 31 07:36:30 crc kubenswrapper[4908]: I0131 07:36:30.476651 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerDied","Data":"5e50620a1ef439c4b8d3265eb3879c7afe2793682101e3cc2375a2553384f613"}
Jan 31 07:36:31 crc kubenswrapper[4908]: I0131 07:36:31.164220 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-mkc4z"
Jan 31 07:36:31 crc kubenswrapper[4908]: I0131 07:36:31.484161 4908 generic.go:334] "Generic (PLEG): container finished" podID="6c6977f3-afad-417f-b8e0-8283a6456b1b" containerID="ab25deddd1de88f05ca785ed88daa13fff60c9abbbfe3d5ddc391cb31e565ba9" exitCode=0
Jan 31 07:36:31 crc kubenswrapper[4908]: I0131 07:36:31.484220 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerDied","Data":"ab25deddd1de88f05ca785ed88daa13fff60c9abbbfe3d5ddc391cb31e565ba9"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.496251 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"dd73c3e48e10997237406a800a217a3bdf865e1601f55f5609399037bb9e9ce8"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.496528 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"c4e4372251bb967d80b9b7cb276e1d83cf13f636bca9d31d098ff68ce3469ccf"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.496543 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"b86838dfd3beb71ba8457342e3be8789daa22532d9654ca229090c79b87708e5"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.496555 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"b081e230e57a97613cda2834f972efc315c16b848b5e18f38985f35caa2829b6"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.496569 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"8bf05cfc20d330774a0ffada435100c9dd9f159cf1756dce23e61b0e9321be49"}
Jan 31 07:36:32 crc kubenswrapper[4908]: I0131 07:36:32.643996 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wfcsd"
Jan 31 07:36:33 crc kubenswrapper[4908]: I0131 07:36:33.505689 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rp98d" event={"ID":"6c6977f3-afad-417f-b8e0-8283a6456b1b","Type":"ContainerStarted","Data":"ea894adecd2621ba025af31de77b40dfe82cb9c2e391443f6136885dc5e17460"}
Jan 31 07:36:33 crc kubenswrapper[4908]: I0131 07:36:33.505992 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:33 crc kubenswrapper[4908]: I0131 07:36:33.528146 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rp98d" podStartSLOduration=6.520240635 podStartE2EDuration="13.528122598s" podCreationTimestamp="2026-01-31 07:36:20 +0000 UTC" firstStartedPulling="2026-01-31 07:36:21.756048722 +0000 UTC m=+888.371993386" lastFinishedPulling="2026-01-31 07:36:28.763930695 +0000 UTC m=+895.379875349" observedRunningTime="2026-01-31 07:36:33.524160314 +0000 UTC m=+900.140104978" watchObservedRunningTime="2026-01-31 07:36:33.528122598 +0000 UTC m=+900.144067262"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.355824 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.357563 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.361751 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.365943 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.365944 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tvjfh"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.381143 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.435261 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6r2\" (UniqueName: \"kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2\") pod \"openstack-operator-index-xsx6m\" (UID: \"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7\") " pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.536377 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6r2\" (UniqueName: \"kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2\") pod \"openstack-operator-index-xsx6m\" (UID: \"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7\") " pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.557271 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6r2\" (UniqueName: \"kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2\") pod \"openstack-operator-index-xsx6m\" (UID: \"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7\") " pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.677772 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:35 crc kubenswrapper[4908]: I0131 07:36:35.902684 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:36 crc kubenswrapper[4908]: I0131 07:36:36.524210 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsx6m" event={"ID":"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7","Type":"ContainerStarted","Data":"bffa37ceb8b6fd519f5d2a6b76114a7a5ee0ef0b8303106b89fa5072ce332f96"}
Jan 31 07:36:36 crc kubenswrapper[4908]: I0131 07:36:36.654433 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:36 crc kubenswrapper[4908]: I0131 07:36:36.690910 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rp98d"
Jan 31 07:36:38 crc kubenswrapper[4908]: I0131 07:36:38.535371 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsx6m" event={"ID":"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7","Type":"ContainerStarted","Data":"ba2d4b8dea2359d664d9d7c19a8ca0f838671e7ee0ba47c3bba467e1c2dbaaa3"}
Jan 31 07:36:38 crc kubenswrapper[4908]: I0131 07:36:38.553034 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xsx6m" podStartSLOduration=1.231454098 podStartE2EDuration="3.553007832s" podCreationTimestamp="2026-01-31 07:36:35 +0000 UTC" firstStartedPulling="2026-01-31 07:36:35.915930745 +0000 UTC m=+902.531875429" lastFinishedPulling="2026-01-31 07:36:38.237484509 +0000 UTC m=+904.853429163" observedRunningTime="2026-01-31 07:36:38.549357159 +0000 UTC m=+905.165301823" watchObservedRunningTime="2026-01-31 07:36:38.553007832 +0000 UTC m=+905.168952486"
Jan 31 07:36:39 crc kubenswrapper[4908]: I0131 07:36:39.138648 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:39 crc kubenswrapper[4908]: I0131 07:36:39.956367 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-62zcb"]
Jan 31 07:36:39 crc kubenswrapper[4908]: I0131 07:36:39.958560 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:39 crc kubenswrapper[4908]: I0131 07:36:39.965958 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-62zcb"]
Jan 31 07:36:40 crc kubenswrapper[4908]: I0131 07:36:40.108829 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45pxj\" (UniqueName: \"kubernetes.io/projected/ec772726-ea9e-4f95-a9e5-88ab00f607f9-kube-api-access-45pxj\") pod \"openstack-operator-index-62zcb\" (UID: \"ec772726-ea9e-4f95-a9e5-88ab00f607f9\") " pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:40 crc kubenswrapper[4908]: I0131 07:36:40.212033 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45pxj\" (UniqueName: \"kubernetes.io/projected/ec772726-ea9e-4f95-a9e5-88ab00f607f9-kube-api-access-45pxj\") pod \"openstack-operator-index-62zcb\" (UID: 
\"ec772726-ea9e-4f95-a9e5-88ab00f607f9\") " pod="openstack-operators/openstack-operator-index-62zcb" Jan 31 07:36:40 crc kubenswrapper[4908]: I0131 07:36:40.237314 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45pxj\" (UniqueName: \"kubernetes.io/projected/ec772726-ea9e-4f95-a9e5-88ab00f607f9-kube-api-access-45pxj\") pod \"openstack-operator-index-62zcb\" (UID: \"ec772726-ea9e-4f95-a9e5-88ab00f607f9\") " pod="openstack-operators/openstack-operator-index-62zcb" Jan 31 07:36:40 crc kubenswrapper[4908]: I0131 07:36:40.286803 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-62zcb" Jan 31 07:36:40 crc kubenswrapper[4908]: I0131 07:36:40.547178 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xsx6m" podUID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" containerName="registry-server" containerID="cri-o://ba2d4b8dea2359d664d9d7c19a8ca0f838671e7ee0ba47c3bba467e1c2dbaaa3" gracePeriod=2 Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.073133 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2xcng" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.235175 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-62zcb"] Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.554944 4908 generic.go:334] "Generic (PLEG): container finished" podID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" containerID="ba2d4b8dea2359d664d9d7c19a8ca0f838671e7ee0ba47c3bba467e1c2dbaaa3" exitCode=0 Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.555078 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsx6m" 
event={"ID":"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7","Type":"ContainerDied","Data":"ba2d4b8dea2359d664d9d7c19a8ca0f838671e7ee0ba47c3bba467e1c2dbaaa3"} Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.556780 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-62zcb" event={"ID":"ec772726-ea9e-4f95-a9e5-88ab00f607f9","Type":"ContainerStarted","Data":"843d1beb657419c117e4f1eb625f48945e4c67d37a19e6beee2eef9a89be0bc3"} Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.556832 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-62zcb" event={"ID":"ec772726-ea9e-4f95-a9e5-88ab00f607f9","Type":"ContainerStarted","Data":"0180f7480c939764d55a4f1cdb2efa72ff8a2c63f2f9bfdf687f6e872e5c6076"} Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.573484 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-62zcb" podStartSLOduration=2.526094198 podStartE2EDuration="2.573428445s" podCreationTimestamp="2026-01-31 07:36:39 +0000 UTC" firstStartedPulling="2026-01-31 07:36:41.24211853 +0000 UTC m=+907.858063184" lastFinishedPulling="2026-01-31 07:36:41.289452777 +0000 UTC m=+907.905397431" observedRunningTime="2026-01-31 07:36:41.568720855 +0000 UTC m=+908.184665539" watchObservedRunningTime="2026-01-31 07:36:41.573428445 +0000 UTC m=+908.189373099" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.640821 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xsx6m" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.659281 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rp98d" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.743947 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"] Jan 31 07:36:41 crc kubenswrapper[4908]: E0131 07:36:41.744252 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" containerName="registry-server" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.744269 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" containerName="registry-server" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.744413 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" containerName="registry-server" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.745349 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.758460 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"] Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.807598 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6r2\" (UniqueName: \"kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2\") pod \"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7\" (UID: \"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7\") " Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.814126 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2" (OuterVolumeSpecName: "kube-api-access-cm6r2") pod "d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" (UID: "d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7"). InnerVolumeSpecName "kube-api-access-cm6r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.909022 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvjq\" (UniqueName: \"kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.909082 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.909481 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:36:41 crc kubenswrapper[4908]: I0131 07:36:41.909610 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6r2\" (UniqueName: \"kubernetes.io/projected/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7-kube-api-access-cm6r2\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.010566 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:36:42 crc 
kubenswrapper[4908]: I0131 07:36:42.010892 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvjq\" (UniqueName: \"kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.010943 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.011331 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.011463 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.035493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wvjq\" (UniqueName: \"kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq\") pod \"redhat-marketplace-qdz5z\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.064770 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.286100 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"]
Jan 31 07:36:42 crc kubenswrapper[4908]: W0131 07:36:42.299202 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee3a2975_d3eb_43a9_a977_af3b7a568a1c.slice/crio-a03bde0a0e7fdcd4e07a4b068b32f5927bc87a7de013ab3f829b32ec2fed1b9f WatchSource:0}: Error finding container a03bde0a0e7fdcd4e07a4b068b32f5927bc87a7de013ab3f829b32ec2fed1b9f: Status 404 returned error can't find the container with id a03bde0a0e7fdcd4e07a4b068b32f5927bc87a7de013ab3f829b32ec2fed1b9f
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.563330 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerStarted","Data":"a03bde0a0e7fdcd4e07a4b068b32f5927bc87a7de013ab3f829b32ec2fed1b9f"}
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.565718 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xsx6m"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.566356 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsx6m" event={"ID":"d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7","Type":"ContainerDied","Data":"bffa37ceb8b6fd519f5d2a6b76114a7a5ee0ef0b8303106b89fa5072ce332f96"}
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.566633 4908 scope.go:117] "RemoveContainer" containerID="ba2d4b8dea2359d664d9d7c19a8ca0f838671e7ee0ba47c3bba467e1c2dbaaa3"
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.581909 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:42 crc kubenswrapper[4908]: I0131 07:36:42.591447 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xsx6m"]
Jan 31 07:36:43 crc kubenswrapper[4908]: I0131 07:36:43.949533 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7" path="/var/lib/kubelet/pods/d2afe25d-38c4-4ee7-a9b9-1a38a23eb7a7/volumes"
Jan 31 07:36:44 crc kubenswrapper[4908]: I0131 07:36:44.582073 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerID="5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea" exitCode=0
Jan 31 07:36:44 crc kubenswrapper[4908]: I0131 07:36:44.582119 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerDied","Data":"5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea"}
Jan 31 07:36:47 crc kubenswrapper[4908]: I0131 07:36:47.601993 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerID="54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15" exitCode=0
Jan 31 07:36:47 crc kubenswrapper[4908]: I0131 07:36:47.602087 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerDied","Data":"54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15"}
Jan 31 07:36:48 crc kubenswrapper[4908]: I0131 07:36:48.611103 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerStarted","Data":"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c"}
Jan 31 07:36:48 crc kubenswrapper[4908]: I0131 07:36:48.640895 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdz5z" podStartSLOduration=4.049150356 podStartE2EDuration="7.640876213s" podCreationTimestamp="2026-01-31 07:36:41 +0000 UTC" firstStartedPulling="2026-01-31 07:36:44.584243635 +0000 UTC m=+911.200188299" lastFinishedPulling="2026-01-31 07:36:48.175969462 +0000 UTC m=+914.791914156" observedRunningTime="2026-01-31 07:36:48.636781889 +0000 UTC m=+915.252726543" watchObservedRunningTime="2026-01-31 07:36:48.640876213 +0000 UTC m=+915.256820867"
Jan 31 07:36:50 crc kubenswrapper[4908]: I0131 07:36:50.287266 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:50 crc kubenswrapper[4908]: I0131 07:36:50.287605 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:50 crc kubenswrapper[4908]: I0131 07:36:50.315902 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:50 crc kubenswrapper[4908]: I0131 07:36:50.648663 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-62zcb"
Jan 31 07:36:52 crc kubenswrapper[4908]: I0131 07:36:52.065947 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:52 crc kubenswrapper[4908]: I0131 07:36:52.066277 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:52 crc kubenswrapper[4908]: I0131 07:36:52.110913 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.143955 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qnqj6"]
Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.145620 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnqj6"
Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.157481 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnqj6"]
Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.288062 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj26\" (UniqueName: \"kubernetes.io/projected/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-kube-api-access-ctj26\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6"
Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.288129 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-catalog-content\") pod \"community-operators-qnqj6\" (UID: 
\"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.288202 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-utilities\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.389156 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-utilities\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.389233 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj26\" (UniqueName: \"kubernetes.io/projected/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-kube-api-access-ctj26\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.389261 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-catalog-content\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.389732 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-catalog-content\") pod \"community-operators-qnqj6\" (UID: 
\"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.389941 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-utilities\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.412284 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj26\" (UniqueName: \"kubernetes.io/projected/1867a8a2-ed70-4a9f-a1a2-7329b27688a8-kube-api-access-ctj26\") pod \"community-operators-qnqj6\" (UID: \"1867a8a2-ed70-4a9f-a1a2-7329b27688a8\") " pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.463532 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.803185 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb"] Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.804507 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.807299 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nm9pp" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.827064 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb"] Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.872123 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnqj6"] Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.897827 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.897884 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrxc\" (UniqueName: \"kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.897921 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: 
\"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.999837 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:56 crc kubenswrapper[4908]: I0131 07:36:56.999891 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxrxc\" (UniqueName: \"kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:56.999922 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.000458 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:57 crc 
kubenswrapper[4908]: I0131 07:36:57.000512 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.020487 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxrxc\" (UniqueName: \"kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc\") pod \"3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.124374 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb"
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.333521 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb"]
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.680727 4908 generic.go:334] "Generic (PLEG): container finished" podID="1867a8a2-ed70-4a9f-a1a2-7329b27688a8" containerID="149c869328750216035c0811e78bc31ab788d7a0c79d4f59143d38e3e0245572" exitCode=0
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.681234 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnqj6" event={"ID":"1867a8a2-ed70-4a9f-a1a2-7329b27688a8","Type":"ContainerDied","Data":"149c869328750216035c0811e78bc31ab788d7a0c79d4f59143d38e3e0245572"}
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.681265 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnqj6" event={"ID":"1867a8a2-ed70-4a9f-a1a2-7329b27688a8","Type":"ContainerStarted","Data":"9d85d95e0dabe69c39292ec63ffd4e966783514993537ede32ec192ae76b3532"}
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.687301 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerStarted","Data":"4a673a475173155687e744d2f7f9d1a40dacd7a15befb1ad8a557e284c52a2de"}
Jan 31 07:36:57 crc kubenswrapper[4908]: I0131 07:36:57.687401 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerStarted","Data":"8da6a9fd225bf48f39537b76290e9ed9c8b274814526f7b72ab08f97ebe546d4"}
Jan 31 07:36:58 crc kubenswrapper[4908]: I0131 07:36:58.694417 4908 generic.go:334] "Generic (PLEG): container finished" podID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerID="4a673a475173155687e744d2f7f9d1a40dacd7a15befb1ad8a557e284c52a2de" exitCode=0
Jan 31 07:36:58 crc kubenswrapper[4908]: I0131 07:36:58.694463 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerDied","Data":"4a673a475173155687e744d2f7f9d1a40dacd7a15befb1ad8a557e284c52a2de"}
Jan 31 07:36:59 crc kubenswrapper[4908]: I0131 07:36:59.701042 4908 generic.go:334] "Generic (PLEG): container finished" podID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerID="a49ed10f7cc446960e55906894756f17d821dd570edd671b50becd830c321e3e" exitCode=0
Jan 31 07:36:59 crc kubenswrapper[4908]: I0131 07:36:59.701116 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerDied","Data":"a49ed10f7cc446960e55906894756f17d821dd570edd671b50becd830c321e3e"}
Jan 31 07:37:01 crc kubenswrapper[4908]: I0131 07:37:01.718701 4908 generic.go:334] "Generic (PLEG): container finished" podID="1867a8a2-ed70-4a9f-a1a2-7329b27688a8" containerID="3bff52837495e7e96af8bf91f7a2975951d670921c77eaea7ab6611949052788" exitCode=0
Jan 31 07:37:01 crc kubenswrapper[4908]: I0131 07:37:01.718778 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnqj6" event={"ID":"1867a8a2-ed70-4a9f-a1a2-7329b27688a8","Type":"ContainerDied","Data":"3bff52837495e7e96af8bf91f7a2975951d670921c77eaea7ab6611949052788"}
Jan 31 07:37:01 crc kubenswrapper[4908]: I0131 07:37:01.722677 4908 generic.go:334] "Generic (PLEG): container finished" podID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerID="648499c764093f45265c4c979dcfc3f41f176e6ce4fb23bf6ac9470c4ddd85c7" exitCode=0
Jan 31 07:37:01 crc kubenswrapper[4908]: I0131 07:37:01.722725 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerDied","Data":"648499c764093f45265c4c979dcfc3f41f176e6ce4fb23bf6ac9470c4ddd85c7"}
Jan 31 07:37:02 crc kubenswrapper[4908]: I0131 07:37:02.114640 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdz5z"
Jan 31 07:37:02 crc kubenswrapper[4908]: I0131 07:37:02.729714 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnqj6" event={"ID":"1867a8a2-ed70-4a9f-a1a2-7329b27688a8","Type":"ContainerStarted","Data":"bf9a022fcedcaa7753e197971cb13f40fbbfd9f4ad07a3d083287c69b0cad54d"}
Jan 31 07:37:02 crc kubenswrapper[4908]: I0131 07:37:02.749284 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qnqj6" podStartSLOduration=2.239108932 podStartE2EDuration="6.749268872s" podCreationTimestamp="2026-01-31 07:36:56 +0000 UTC" firstStartedPulling="2026-01-31 07:36:57.684199577 +0000 UTC m=+924.300144271" lastFinishedPulling="2026-01-31 07:37:02.194359557 +0000 UTC m=+928.810304211" observedRunningTime="2026-01-31 07:37:02.747335933 +0000 UTC m=+929.363280587" watchObservedRunningTime="2026-01-31 07:37:02.749268872 +0000 UTC m=+929.365213526"
Jan 31 07:37:02 crc kubenswrapper[4908]: I0131 07:37:02.998076 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.097729 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle\") pod \"914ddeaf-aa45-4a08-a266-d166da23a80b\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.097836 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxrxc\" (UniqueName: \"kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc\") pod \"914ddeaf-aa45-4a08-a266-d166da23a80b\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.098598 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle" (OuterVolumeSpecName: "bundle") pod "914ddeaf-aa45-4a08-a266-d166da23a80b" (UID: "914ddeaf-aa45-4a08-a266-d166da23a80b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.098785 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util\") pod \"914ddeaf-aa45-4a08-a266-d166da23a80b\" (UID: \"914ddeaf-aa45-4a08-a266-d166da23a80b\") " Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.099397 4908 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.103005 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc" (OuterVolumeSpecName: "kube-api-access-bxrxc") pod "914ddeaf-aa45-4a08-a266-d166da23a80b" (UID: "914ddeaf-aa45-4a08-a266-d166da23a80b"). InnerVolumeSpecName "kube-api-access-bxrxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.118821 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util" (OuterVolumeSpecName: "util") pod "914ddeaf-aa45-4a08-a266-d166da23a80b" (UID: "914ddeaf-aa45-4a08-a266-d166da23a80b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.200366 4908 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/914ddeaf-aa45-4a08-a266-d166da23a80b-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.200405 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxrxc\" (UniqueName: \"kubernetes.io/projected/914ddeaf-aa45-4a08-a266-d166da23a80b-kube-api-access-bxrxc\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.736247 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.736268 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb" event={"ID":"914ddeaf-aa45-4a08-a266-d166da23a80b","Type":"ContainerDied","Data":"8da6a9fd225bf48f39537b76290e9ed9c8b274814526f7b72ab08f97ebe546d4"} Jan 31 07:37:03 crc kubenswrapper[4908]: I0131 07:37:03.736346 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da6a9fd225bf48f39537b76290e9ed9c8b274814526f7b72ab08f97ebe546d4" Jan 31 07:37:05 crc kubenswrapper[4908]: I0131 07:37:05.737044 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"] Jan 31 07:37:05 crc kubenswrapper[4908]: I0131 07:37:05.737577 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdz5z" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="registry-server" containerID="cri-o://3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c" gracePeriod=2 Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 
07:37:06.110499 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.235016 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities\") pod \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.235105 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvjq\" (UniqueName: \"kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq\") pod \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.235153 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content\") pod \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\" (UID: \"ee3a2975-d3eb-43a9-a977-af3b7a568a1c\") " Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.236368 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities" (OuterVolumeSpecName: "utilities") pod "ee3a2975-d3eb-43a9-a977-af3b7a568a1c" (UID: "ee3a2975-d3eb-43a9-a977-af3b7a568a1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.240031 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq" (OuterVolumeSpecName: "kube-api-access-5wvjq") pod "ee3a2975-d3eb-43a9-a977-af3b7a568a1c" (UID: "ee3a2975-d3eb-43a9-a977-af3b7a568a1c"). InnerVolumeSpecName "kube-api-access-5wvjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.260351 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee3a2975-d3eb-43a9-a977-af3b7a568a1c" (UID: "ee3a2975-d3eb-43a9-a977-af3b7a568a1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.336927 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.336966 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvjq\" (UniqueName: \"kubernetes.io/projected/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-kube-api-access-5wvjq\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.336993 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3a2975-d3eb-43a9-a977-af3b7a568a1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.464532 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:37:06 crc 
kubenswrapper[4908]: I0131 07:37:06.464607 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.512672 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.760568 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerID="3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c" exitCode=0 Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.760608 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdz5z" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.760621 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerDied","Data":"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c"} Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.761722 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdz5z" event={"ID":"ee3a2975-d3eb-43a9-a977-af3b7a568a1c","Type":"ContainerDied","Data":"a03bde0a0e7fdcd4e07a4b068b32f5927bc87a7de013ab3f829b32ec2fed1b9f"} Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.761740 4908 scope.go:117] "RemoveContainer" containerID="3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.777309 4908 scope.go:117] "RemoveContainer" containerID="54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.791672 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"] Jan 31 
07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.798208 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdz5z"] Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.808129 4908 scope.go:117] "RemoveContainer" containerID="5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.828287 4908 scope.go:117] "RemoveContainer" containerID="3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.828743 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c\": container with ID starting with 3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c not found: ID does not exist" containerID="3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.828773 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c"} err="failed to get container status \"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c\": rpc error: code = NotFound desc = could not find container \"3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c\": container with ID starting with 3e3e34f3db7469a5ba5f6d7dc2b7dd5ae54e0267405c59f4ec8adf686dac6e2c not found: ID does not exist" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.828797 4908 scope.go:117] "RemoveContainer" containerID="54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.830363 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15\": container with ID starting with 54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15 not found: ID does not exist" containerID="54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.830408 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15"} err="failed to get container status \"54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15\": rpc error: code = NotFound desc = could not find container \"54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15\": container with ID starting with 54c45ddc21541b3fab4b09440fb11a9703c20e47f73ff5b03a06821a3117fb15 not found: ID does not exist" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.830442 4908 scope.go:117] "RemoveContainer" containerID="5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.831477 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea\": container with ID starting with 5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea not found: ID does not exist" containerID="5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.831548 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea"} err="failed to get container status \"5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea\": rpc error: code = NotFound desc = could not find container \"5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea\": container with ID 
starting with 5af1d2b0b08314c43375a2ff50a766d39327f4551d600726bfa6e46c75c5cfea not found: ID does not exist" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.904906 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g"] Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905148 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="extract-content" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905167 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="extract-content" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905178 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="extract" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905186 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="extract" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905195 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="util" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905201 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="util" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905214 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="extract-utilities" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905220 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="extract-utilities" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905229 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="pull" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905235 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="pull" Jan 31 07:37:06 crc kubenswrapper[4908]: E0131 07:37:06.905246 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="registry-server" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905252 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="registry-server" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905365 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" containerName="registry-server" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905376 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="914ddeaf-aa45-4a08-a266-d166da23a80b" containerName="extract" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.905734 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.908391 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wnt8l" Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.927383 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g"] Jan 31 07:37:06 crc kubenswrapper[4908]: I0131 07:37:06.945127 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l2r\" (UniqueName: \"kubernetes.io/projected/5a322882-c0d9-45ec-803c-6e1ea6270dbb-kube-api-access-z5l2r\") pod \"openstack-operator-controller-init-86bfc46b97-tmj4g\" (UID: \"5a322882-c0d9-45ec-803c-6e1ea6270dbb\") " pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.046241 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l2r\" (UniqueName: \"kubernetes.io/projected/5a322882-c0d9-45ec-803c-6e1ea6270dbb-kube-api-access-z5l2r\") pod \"openstack-operator-controller-init-86bfc46b97-tmj4g\" (UID: \"5a322882-c0d9-45ec-803c-6e1ea6270dbb\") " pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.062472 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l2r\" (UniqueName: \"kubernetes.io/projected/5a322882-c0d9-45ec-803c-6e1ea6270dbb-kube-api-access-z5l2r\") pod \"openstack-operator-controller-init-86bfc46b97-tmj4g\" (UID: \"5a322882-c0d9-45ec-803c-6e1ea6270dbb\") " pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.221792 4908 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.420239 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g"] Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.767877 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" event={"ID":"5a322882-c0d9-45ec-803c-6e1ea6270dbb","Type":"ContainerStarted","Data":"eb05750c683051f162921cde95e210254b36d119f206a469f9397f0aae263622"} Jan 31 07:37:07 crc kubenswrapper[4908]: I0131 07:37:07.958939 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3a2975-d3eb-43a9-a977-af3b7a568a1c" path="/var/lib/kubelet/pods/ee3a2975-d3eb-43a9-a977-af3b7a568a1c/volumes" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.147716 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.149865 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.191142 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.241951 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.242377 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.242652 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nszh\" (UniqueName: \"kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.343908 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nszh\" (UniqueName: \"kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.344189 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.344740 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.345143 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.345502 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.366781 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nszh\" (UniqueName: \"kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh\") pod \"certified-operators-qnc98\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.482966 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.748601 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:13 crc kubenswrapper[4908]: W0131 07:37:13.753309 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod454e4fcb_98e0_436f_a1af_2efd4c10f34d.slice/crio-dedaddddb0562772e3a7c02c38c37bba10d9bc3cad4e5ddc8e605618912a1f29 WatchSource:0}: Error finding container dedaddddb0562772e3a7c02c38c37bba10d9bc3cad4e5ddc8e605618912a1f29: Status 404 returned error can't find the container with id dedaddddb0562772e3a7c02c38c37bba10d9bc3cad4e5ddc8e605618912a1f29 Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.807629 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerStarted","Data":"dedaddddb0562772e3a7c02c38c37bba10d9bc3cad4e5ddc8e605618912a1f29"} Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.809397 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" event={"ID":"5a322882-c0d9-45ec-803c-6e1ea6270dbb","Type":"ContainerStarted","Data":"6beae21f00d5423aa610a97812498b9cc878df89d21172af04f6e599aa2a540f"} Jan 31 07:37:13 crc kubenswrapper[4908]: I0131 07:37:13.809581 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:14 crc kubenswrapper[4908]: I0131 07:37:14.817906 4908 generic.go:334] "Generic (PLEG): container finished" podID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerID="b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3" exitCode=0 Jan 31 07:37:14 crc kubenswrapper[4908]: I0131 07:37:14.817966 4908 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerDied","Data":"b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3"} Jan 31 07:37:14 crc kubenswrapper[4908]: I0131 07:37:14.837619 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" podStartSLOduration=3.060229556 podStartE2EDuration="8.837594107s" podCreationTimestamp="2026-01-31 07:37:06 +0000 UTC" firstStartedPulling="2026-01-31 07:37:07.438550438 +0000 UTC m=+934.054495102" lastFinishedPulling="2026-01-31 07:37:13.215914999 +0000 UTC m=+939.831859653" observedRunningTime="2026-01-31 07:37:13.850354082 +0000 UTC m=+940.466298736" watchObservedRunningTime="2026-01-31 07:37:14.837594107 +0000 UTC m=+941.453538771" Jan 31 07:37:15 crc kubenswrapper[4908]: I0131 07:37:15.825885 4908 generic.go:334] "Generic (PLEG): container finished" podID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerID="15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965" exitCode=0 Jan 31 07:37:15 crc kubenswrapper[4908]: I0131 07:37:15.825948 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerDied","Data":"15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965"} Jan 31 07:37:16 crc kubenswrapper[4908]: I0131 07:37:16.507185 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qnqj6" Jan 31 07:37:16 crc kubenswrapper[4908]: I0131 07:37:16.833904 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerStarted","Data":"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3"} 
Jan 31 07:37:16 crc kubenswrapper[4908]: I0131 07:37:16.854359 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qnc98" podStartSLOduration=2.467244358 podStartE2EDuration="3.854345866s" podCreationTimestamp="2026-01-31 07:37:13 +0000 UTC" firstStartedPulling="2026-01-31 07:37:14.819604119 +0000 UTC m=+941.435548773" lastFinishedPulling="2026-01-31 07:37:16.206705627 +0000 UTC m=+942.822650281" observedRunningTime="2026-01-31 07:37:16.852696794 +0000 UTC m=+943.468641458" watchObservedRunningTime="2026-01-31 07:37:16.854345866 +0000 UTC m=+943.470290520" Jan 31 07:37:18 crc kubenswrapper[4908]: I0131 07:37:18.370530 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnqj6"] Jan 31 07:37:18 crc kubenswrapper[4908]: I0131 07:37:18.739658 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:37:18 crc kubenswrapper[4908]: I0131 07:37:18.740446 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l7vpt" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="registry-server" containerID="cri-o://a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0" gracePeriod=2 Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.166239 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.228638 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities\") pod \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.228871 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content\") pod \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.228949 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwbn6\" (UniqueName: \"kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6\") pod \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\" (UID: \"db4e226f-ab5f-4ce2-bfbb-baefd681a009\") " Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.229793 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities" (OuterVolumeSpecName: "utilities") pod "db4e226f-ab5f-4ce2-bfbb-baefd681a009" (UID: "db4e226f-ab5f-4ce2-bfbb-baefd681a009"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.235688 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6" (OuterVolumeSpecName: "kube-api-access-hwbn6") pod "db4e226f-ab5f-4ce2-bfbb-baefd681a009" (UID: "db4e226f-ab5f-4ce2-bfbb-baefd681a009"). InnerVolumeSpecName "kube-api-access-hwbn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.288603 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db4e226f-ab5f-4ce2-bfbb-baefd681a009" (UID: "db4e226f-ab5f-4ce2-bfbb-baefd681a009"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.330309 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.330340 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwbn6\" (UniqueName: \"kubernetes.io/projected/db4e226f-ab5f-4ce2-bfbb-baefd681a009-kube-api-access-hwbn6\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.330350 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4e226f-ab5f-4ce2-bfbb-baefd681a009-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.854065 4908 generic.go:334] "Generic (PLEG): container finished" podID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerID="a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0" exitCode=0 Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.854098 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerDied","Data":"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0"} Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.854378 4908 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-l7vpt" event={"ID":"db4e226f-ab5f-4ce2-bfbb-baefd681a009","Type":"ContainerDied","Data":"806b3340b1118c083693c84fc095f2aa3783cd8c4d3e7ca31cd302062524403f"} Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.854396 4908 scope.go:117] "RemoveContainer" containerID="a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.854145 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7vpt" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.879465 4908 scope.go:117] "RemoveContainer" containerID="c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.887342 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.893363 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l7vpt"] Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.914558 4908 scope.go:117] "RemoveContainer" containerID="7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.936363 4908 scope.go:117] "RemoveContainer" containerID="a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0" Jan 31 07:37:19 crc kubenswrapper[4908]: E0131 07:37:19.936909 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0\": container with ID starting with a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0 not found: ID does not exist" containerID="a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 
07:37:19.936939 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0"} err="failed to get container status \"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0\": rpc error: code = NotFound desc = could not find container \"a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0\": container with ID starting with a5644a3c659130828b8dc95a05a27fc510ba96a79840f353f050b1dac542e3e0 not found: ID does not exist" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.936962 4908 scope.go:117] "RemoveContainer" containerID="c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e" Jan 31 07:37:19 crc kubenswrapper[4908]: E0131 07:37:19.937292 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e\": container with ID starting with c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e not found: ID does not exist" containerID="c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.937318 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e"} err="failed to get container status \"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e\": rpc error: code = NotFound desc = could not find container \"c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e\": container with ID starting with c9315fc8b91e5f9234117cfe9ef0bdf7ba4d263e8618c3b9b38965605e685f6e not found: ID does not exist" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.937338 4908 scope.go:117] "RemoveContainer" containerID="7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1" Jan 31 07:37:19 crc 
kubenswrapper[4908]: E0131 07:37:19.944901 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1\": container with ID starting with 7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1 not found: ID does not exist" containerID="7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.944927 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1"} err="failed to get container status \"7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1\": rpc error: code = NotFound desc = could not find container \"7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1\": container with ID starting with 7590b743161d99dd7318484424437cf51396f50bee3cce8231535c3b5c5765e1 not found: ID does not exist" Jan 31 07:37:19 crc kubenswrapper[4908]: I0131 07:37:19.958416 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" path="/var/lib/kubelet/pods/db4e226f-ab5f-4ce2-bfbb-baefd681a009/volumes" Jan 31 07:37:23 crc kubenswrapper[4908]: I0131 07:37:23.483929 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:23 crc kubenswrapper[4908]: I0131 07:37:23.484669 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:23 crc kubenswrapper[4908]: I0131 07:37:23.530188 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:23 crc kubenswrapper[4908]: I0131 07:37:23.925049 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:25 crc kubenswrapper[4908]: I0131 07:37:25.934924 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:25 crc kubenswrapper[4908]: I0131 07:37:25.935151 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qnc98" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="registry-server" containerID="cri-o://5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3" gracePeriod=2 Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.258840 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.412719 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities\") pod \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.412788 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content\") pod \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.412865 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nszh\" (UniqueName: \"kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh\") pod \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\" (UID: \"454e4fcb-98e0-436f-a1af-2efd4c10f34d\") " Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.413774 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities" (OuterVolumeSpecName: "utilities") pod "454e4fcb-98e0-436f-a1af-2efd4c10f34d" (UID: "454e4fcb-98e0-436f-a1af-2efd4c10f34d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.418368 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh" (OuterVolumeSpecName: "kube-api-access-8nszh") pod "454e4fcb-98e0-436f-a1af-2efd4c10f34d" (UID: "454e4fcb-98e0-436f-a1af-2efd4c10f34d"). InnerVolumeSpecName "kube-api-access-8nszh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.514569 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nszh\" (UniqueName: \"kubernetes.io/projected/454e4fcb-98e0-436f-a1af-2efd4c10f34d-kube-api-access-8nszh\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.514601 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.899113 4908 generic.go:334] "Generic (PLEG): container finished" podID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerID="5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3" exitCode=0 Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.899172 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qnc98" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.899191 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerDied","Data":"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3"} Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.899533 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnc98" event={"ID":"454e4fcb-98e0-436f-a1af-2efd4c10f34d","Type":"ContainerDied","Data":"dedaddddb0562772e3a7c02c38c37bba10d9bc3cad4e5ddc8e605618912a1f29"} Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.899551 4908 scope.go:117] "RemoveContainer" containerID="5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.916356 4908 scope.go:117] "RemoveContainer" containerID="15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.935730 4908 scope.go:117] "RemoveContainer" containerID="b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.967092 4908 scope.go:117] "RemoveContainer" containerID="5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3" Jan 31 07:37:26 crc kubenswrapper[4908]: E0131 07:37:26.967634 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3\": container with ID starting with 5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3 not found: ID does not exist" containerID="5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.967712 4908 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3"} err="failed to get container status \"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3\": rpc error: code = NotFound desc = could not find container \"5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3\": container with ID starting with 5a7427a9a5cdbadbe9bd4ffbb89650374f7be522c419d565f70df49c11e05fb3 not found: ID does not exist" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.967756 4908 scope.go:117] "RemoveContainer" containerID="15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965" Jan 31 07:37:26 crc kubenswrapper[4908]: E0131 07:37:26.968256 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965\": container with ID starting with 15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965 not found: ID does not exist" containerID="15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.968298 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965"} err="failed to get container status \"15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965\": rpc error: code = NotFound desc = could not find container \"15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965\": container with ID starting with 15be9fc8550f88482378a4aefb479e64f4086e929afe0348688e770bdb09c965 not found: ID does not exist" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.968323 4908 scope.go:117] "RemoveContainer" containerID="b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3" Jan 31 07:37:26 crc kubenswrapper[4908]: E0131 07:37:26.968708 4908 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3\": container with ID starting with b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3 not found: ID does not exist" containerID="b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3" Jan 31 07:37:26 crc kubenswrapper[4908]: I0131 07:37:26.968750 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3"} err="failed to get container status \"b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3\": rpc error: code = NotFound desc = could not find container \"b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3\": container with ID starting with b4cbd62eb3656b443e55ed3bfc5740c911ef25ca19827803511d69e672e90bb3 not found: ID does not exist" Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.227736 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-86bfc46b97-tmj4g" Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.398308 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "454e4fcb-98e0-436f-a1af-2efd4c10f34d" (UID: "454e4fcb-98e0-436f-a1af-2efd4c10f34d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.426714 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454e4fcb-98e0-436f-a1af-2efd4c10f34d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.535000 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.549627 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qnc98"] Jan 31 07:37:27 crc kubenswrapper[4908]: I0131 07:37:27.946720 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" path="/var/lib/kubelet/pods/454e4fcb-98e0-436f-a1af-2efd4c10f34d/volumes" Jan 31 07:37:40 crc kubenswrapper[4908]: I0131 07:37:40.431340 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:37:40 crc kubenswrapper[4908]: I0131 07:37:40.431892 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.012547 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb"] Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013322 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="extract-utilities" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013335 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="extract-utilities" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013350 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="extract-content" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013358 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="extract-content" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013370 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="extract-content" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013378 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="extract-content" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013387 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013394 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013407 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="extract-utilities" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013412 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="extract-utilities" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.013433 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013438 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013533 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="454e4fcb-98e0-436f-a1af-2efd4c10f34d" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013551 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e226f-ab5f-4ce2-bfbb-baefd681a009" containerName="registry-server" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.013905 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.016597 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-tmsbs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.026812 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.049663 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.050462 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.052517 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.052874 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f7f48" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.053218 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.065280 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4pdtk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.081429 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.092055 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.095294 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-944q9"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.096835 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.102905 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-djc5w" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.120955 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-944q9"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.136807 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.139048 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.145650 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vsdhk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.161651 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fjkv\" (UniqueName: \"kubernetes.io/projected/cccc8258-9b99-4e94-911f-46cd1f95e2b7-kube-api-access-5fjkv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-9xflb\" (UID: \"cccc8258-9b99-4e94-911f-46cd1f95e2b7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.161733 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dnf\" (UniqueName: \"kubernetes.io/projected/94ede047-b24c-4510-946c-1fcc23ce8862-kube-api-access-g5dnf\") pod \"cinder-operator-controller-manager-8d874c8fc-d4khs\" (UID: \"94ede047-b24c-4510-946c-1fcc23ce8862\") " 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.161771 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjh4b\" (UniqueName: \"kubernetes.io/projected/fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32-kube-api-access-kjh4b\") pod \"designate-operator-controller-manager-6d9697b7f4-nkfsp\" (UID: \"fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.177608 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.178615 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.181380 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.184711 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.189830 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5m7q7" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.192429 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.193525 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.196575 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.205246 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.207474 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xf5vr" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.263098 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264520 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264582 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fjkv\" (UniqueName: \"kubernetes.io/projected/cccc8258-9b99-4e94-911f-46cd1f95e2b7-kube-api-access-5fjkv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-9xflb\" (UID: \"cccc8258-9b99-4e94-911f-46cd1f95e2b7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264620 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjwv\" (UniqueName: 
\"kubernetes.io/projected/2decce95-3c8d-4a0e-b624-dcb914947d90-kube-api-access-jbjwv\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264642 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwq69\" (UniqueName: \"kubernetes.io/projected/0f7ae625-c53e-4e59-8fa6-357e0cf2e058-kube-api-access-lwq69\") pod \"heat-operator-controller-manager-69d6db494d-p4nmk\" (UID: \"0f7ae625-c53e-4e59-8fa6-357e0cf2e058\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264665 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5bx\" (UniqueName: \"kubernetes.io/projected/1103bcc6-b9d1-44b0-9206-1cf316e40aa1-kube-api-access-kz5bx\") pod \"horizon-operator-controller-manager-5fb775575f-z9bfq\" (UID: \"1103bcc6-b9d1-44b0-9206-1cf316e40aa1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264689 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5dnf\" (UniqueName: \"kubernetes.io/projected/94ede047-b24c-4510-946c-1fcc23ce8862-kube-api-access-g5dnf\") pod \"cinder-operator-controller-manager-8d874c8fc-d4khs\" (UID: \"94ede047-b24c-4510-946c-1fcc23ce8862\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264718 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjh4b\" (UniqueName: \"kubernetes.io/projected/fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32-kube-api-access-kjh4b\") pod 
\"designate-operator-controller-manager-6d9697b7f4-nkfsp\" (UID: \"fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264739 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4df4\" (UniqueName: \"kubernetes.io/projected/51c0e91e-58e4-4cdf-a06a-b79078097f32-kube-api-access-x4df4\") pod \"glance-operator-controller-manager-8886f4c47-944q9\" (UID: \"51c0e91e-58e4-4cdf-a06a-b79078097f32\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.264996 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.274318 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.275601 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.281691 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ml8x8" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.309511 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.323875 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xkjvb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.333554 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.340811 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-brc72"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.341840 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.343198 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5dnf\" (UniqueName: \"kubernetes.io/projected/94ede047-b24c-4510-946c-1fcc23ce8862-kube-api-access-g5dnf\") pod \"cinder-operator-controller-manager-8d874c8fc-d4khs\" (UID: \"94ede047-b24c-4510-946c-1fcc23ce8862\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.349477 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjh4b\" (UniqueName: \"kubernetes.io/projected/fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32-kube-api-access-kjh4b\") pod \"designate-operator-controller-manager-6d9697b7f4-nkfsp\" (UID: \"fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.349666 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8vzg8" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.354110 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fjkv\" (UniqueName: \"kubernetes.io/projected/cccc8258-9b99-4e94-911f-46cd1f95e2b7-kube-api-access-5fjkv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-9xflb\" (UID: \"cccc8258-9b99-4e94-911f-46cd1f95e2b7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.361033 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-brc72"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366394 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl5t2\" (UniqueName: \"kubernetes.io/projected/0ba21e20-1850-4eea-9eb1-c07fcb41619f-kube-api-access-xl5t2\") pod \"ironic-operator-controller-manager-5f4b8bd54d-f254v\" (UID: \"0ba21e20-1850-4eea-9eb1-c07fcb41619f\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366440 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366464 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj997\" (UniqueName: \"kubernetes.io/projected/d689356c-4f7d-46ed-927d-ce6b20bb4906-kube-api-access-lj997\") pod \"keystone-operator-controller-manager-84f48565d4-fvt6p\" (UID: \"d689356c-4f7d-46ed-927d-ce6b20bb4906\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366485 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlmpt\" (UniqueName: \"kubernetes.io/projected/ed8b4596-65fa-4171-bde7-1507bc3fe80b-kube-api-access-mlmpt\") pod \"manila-operator-controller-manager-7dd968899f-brc72\" (UID: \"ed8b4596-65fa-4171-bde7-1507bc3fe80b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366516 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjwv\" (UniqueName: 
\"kubernetes.io/projected/2decce95-3c8d-4a0e-b624-dcb914947d90-kube-api-access-jbjwv\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366536 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwq69\" (UniqueName: \"kubernetes.io/projected/0f7ae625-c53e-4e59-8fa6-357e0cf2e058-kube-api-access-lwq69\") pod \"heat-operator-controller-manager-69d6db494d-p4nmk\" (UID: \"0f7ae625-c53e-4e59-8fa6-357e0cf2e058\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366554 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5bx\" (UniqueName: \"kubernetes.io/projected/1103bcc6-b9d1-44b0-9206-1cf316e40aa1-kube-api-access-kz5bx\") pod \"horizon-operator-controller-manager-5fb775575f-z9bfq\" (UID: \"1103bcc6-b9d1-44b0-9206-1cf316e40aa1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.366582 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4df4\" (UniqueName: \"kubernetes.io/projected/51c0e91e-58e4-4cdf-a06a-b79078097f32-kube-api-access-x4df4\") pod \"glance-operator-controller-manager-8886f4c47-944q9\" (UID: \"51c0e91e-58e4-4cdf-a06a-b79078097f32\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.366906 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.366943 4908 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:47.866928388 +0000 UTC m=+974.482873042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.368977 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.370075 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.372357 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.384055 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-srd9l" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.384774 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.420293 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5bx\" (UniqueName: \"kubernetes.io/projected/1103bcc6-b9d1-44b0-9206-1cf316e40aa1-kube-api-access-kz5bx\") pod \"horizon-operator-controller-manager-5fb775575f-z9bfq\" (UID: \"1103bcc6-b9d1-44b0-9206-1cf316e40aa1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.422111 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwq69\" (UniqueName: \"kubernetes.io/projected/0f7ae625-c53e-4e59-8fa6-357e0cf2e058-kube-api-access-lwq69\") pod \"heat-operator-controller-manager-69d6db494d-p4nmk\" (UID: \"0f7ae625-c53e-4e59-8fa6-357e0cf2e058\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.432800 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.433594 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.437245 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4df4\" (UniqueName: \"kubernetes.io/projected/51c0e91e-58e4-4cdf-a06a-b79078097f32-kube-api-access-x4df4\") pod \"glance-operator-controller-manager-8886f4c47-944q9\" (UID: \"51c0e91e-58e4-4cdf-a06a-b79078097f32\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.451876 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wczl6" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.452072 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bf666"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.452886 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.466410 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gp68k" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.467171 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.468841 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.471561 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.473101 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.483224 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjwv\" (UniqueName: \"kubernetes.io/projected/2decce95-3c8d-4a0e-b624-dcb914947d90-kube-api-access-jbjwv\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.483609 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gb97r" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484216 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlmpt\" (UniqueName: \"kubernetes.io/projected/ed8b4596-65fa-4171-bde7-1507bc3fe80b-kube-api-access-mlmpt\") pod \"manila-operator-controller-manager-7dd968899f-brc72\" (UID: \"ed8b4596-65fa-4171-bde7-1507bc3fe80b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484299 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvpg\" (UniqueName: \"kubernetes.io/projected/1a292336-6b89-4bd0-9f25-28190fad7f20-kube-api-access-hgvpg\") pod \"nova-operator-controller-manager-55bff696bd-bf666\" (UID: 
\"1a292336-6b89-4bd0-9f25-28190fad7f20\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484336 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5gv5\" (UniqueName: \"kubernetes.io/projected/1d777955-e0fc-4554-8e40-b17bdaaf752f-kube-api-access-k5gv5\") pod \"neutron-operator-controller-manager-585dbc889-h8xqc\" (UID: \"1d777955-e0fc-4554-8e40-b17bdaaf752f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484406 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl5t2\" (UniqueName: \"kubernetes.io/projected/0ba21e20-1850-4eea-9eb1-c07fcb41619f-kube-api-access-xl5t2\") pod \"ironic-operator-controller-manager-5f4b8bd54d-f254v\" (UID: \"0ba21e20-1850-4eea-9eb1-c07fcb41619f\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484450 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj997\" (UniqueName: \"kubernetes.io/projected/d689356c-4f7d-46ed-927d-ce6b20bb4906-kube-api-access-lj997\") pod \"keystone-operator-controller-manager-84f48565d4-fvt6p\" (UID: \"d689356c-4f7d-46ed-927d-ce6b20bb4906\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.484479 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55c2\" (UniqueName: \"kubernetes.io/projected/8d3bf73c-1178-4613-a68a-7897eaf053e9-kube-api-access-q55c2\") pod \"mariadb-operator-controller-manager-67bf948998-j6b5m\" (UID: \"8d3bf73c-1178-4613-a68a-7897eaf053e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" 
Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.492800 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.508713 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl5t2\" (UniqueName: \"kubernetes.io/projected/0ba21e20-1850-4eea-9eb1-c07fcb41619f-kube-api-access-xl5t2\") pod \"ironic-operator-controller-manager-5f4b8bd54d-f254v\" (UID: \"0ba21e20-1850-4eea-9eb1-c07fcb41619f\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.510419 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.514044 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bf666"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.523452 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj997\" (UniqueName: \"kubernetes.io/projected/d689356c-4f7d-46ed-927d-ce6b20bb4906-kube-api-access-lj997\") pod \"keystone-operator-controller-manager-84f48565d4-fvt6p\" (UID: \"d689356c-4f7d-46ed-927d-ce6b20bb4906\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.541216 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlmpt\" (UniqueName: \"kubernetes.io/projected/ed8b4596-65fa-4171-bde7-1507bc3fe80b-kube-api-access-mlmpt\") pod \"manila-operator-controller-manager-7dd968899f-brc72\" (UID: \"ed8b4596-65fa-4171-bde7-1507bc3fe80b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:37:47 crc 
kubenswrapper[4908]: I0131 07:37:47.552508 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.563830 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.564803 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.576647 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-8wt7w" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.576909 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.583310 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.584392 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.587669 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55c2\" (UniqueName: \"kubernetes.io/projected/8d3bf73c-1178-4613-a68a-7897eaf053e9-kube-api-access-q55c2\") pod \"mariadb-operator-controller-manager-67bf948998-j6b5m\" (UID: \"8d3bf73c-1178-4613-a68a-7897eaf053e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.587740 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf98x\" (UniqueName: \"kubernetes.io/projected/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-kube-api-access-jf98x\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.587770 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvpg\" (UniqueName: \"kubernetes.io/projected/1a292336-6b89-4bd0-9f25-28190fad7f20-kube-api-access-hgvpg\") pod \"nova-operator-controller-manager-55bff696bd-bf666\" (UID: \"1a292336-6b89-4bd0-9f25-28190fad7f20\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.587797 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5gv5\" (UniqueName: \"kubernetes.io/projected/1d777955-e0fc-4554-8e40-b17bdaaf752f-kube-api-access-k5gv5\") pod \"neutron-operator-controller-manager-585dbc889-h8xqc\" (UID: \"1d777955-e0fc-4554-8e40-b17bdaaf752f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:37:47 crc 
kubenswrapper[4908]: I0131 07:37:47.587816 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hx2\" (UniqueName: \"kubernetes.io/projected/e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61-kube-api-access-d4hx2\") pod \"octavia-operator-controller-manager-6687f8d877-j8gx2\" (UID: \"e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.587854 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.591156 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tsh88" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.600067 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.600930 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.608390 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-htb5z" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.608559 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.609499 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.609875 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.620830 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55c2\" (UniqueName: \"kubernetes.io/projected/8d3bf73c-1178-4613-a68a-7897eaf053e9-kube-api-access-q55c2\") pod \"mariadb-operator-controller-manager-67bf948998-j6b5m\" (UID: \"8d3bf73c-1178-4613-a68a-7897eaf053e9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.623946 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.625195 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvpg\" (UniqueName: \"kubernetes.io/projected/1a292336-6b89-4bd0-9f25-28190fad7f20-kube-api-access-hgvpg\") pod \"nova-operator-controller-manager-55bff696bd-bf666\" (UID: \"1a292336-6b89-4bd0-9f25-28190fad7f20\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.625412 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.627288 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-npbj9" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.630460 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.630619 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5gv5\" (UniqueName: \"kubernetes.io/projected/1d777955-e0fc-4554-8e40-b17bdaaf752f-kube-api-access-k5gv5\") pod \"neutron-operator-controller-manager-585dbc889-h8xqc\" (UID: \"1d777955-e0fc-4554-8e40-b17bdaaf752f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.631258 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.631495 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.632705 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.637202 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w8b9f" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689404 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf98x\" (UniqueName: \"kubernetes.io/projected/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-kube-api-access-jf98x\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689468 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hx2\" (UniqueName: \"kubernetes.io/projected/e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61-kube-api-access-d4hx2\") pod \"octavia-operator-controller-manager-6687f8d877-j8gx2\" (UID: \"e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689494 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qhq\" (UniqueName: \"kubernetes.io/projected/f8753705-b38b-4124-ad36-013ca716e47e-kube-api-access-r2qhq\") pod \"swift-operator-controller-manager-68fc8c869-rcz4z\" (UID: \"f8753705-b38b-4124-ad36-013ca716e47e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689525 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxj7c\" (UniqueName: 
\"kubernetes.io/projected/d00296fc-50ea-4b6e-9c9b-492ea9027347-kube-api-access-lxj7c\") pod \"telemetry-operator-controller-manager-64b5b76f97-bgpbn\" (UID: \"d00296fc-50ea-4b6e-9c9b-492ea9027347\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689545 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689577 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864nb\" (UniqueName: \"kubernetes.io/projected/349437a4-aa6e-4e78-95e5-0ee664eba158-kube-api-access-864nb\") pod \"placement-operator-controller-manager-5b964cf4cd-2nnfd\" (UID: \"349437a4-aa6e-4e78-95e5-0ee664eba158\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.689614 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvl9h\" (UniqueName: \"kubernetes.io/projected/c3d892f3-217a-4c11-9625-4b0dfffeaca0-kube-api-access-vvl9h\") pod \"ovn-operator-controller-manager-788c46999f-nf2q7\" (UID: \"c3d892f3-217a-4c11-9625-4b0dfffeaca0\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.689826 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.689859 4908 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:37:48.189846613 +0000 UTC m=+974.805791257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.702038 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.702856 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.711348 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-q6sqx" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.716108 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hx2\" (UniqueName: \"kubernetes.io/projected/e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61-kube-api-access-d4hx2\") pod \"octavia-operator-controller-manager-6687f8d877-j8gx2\" (UID: \"e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.724929 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf98x\" (UniqueName: \"kubernetes.io/projected/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-kube-api-access-jf98x\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.726277 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.750223 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.782394 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.790086 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.791002 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864nb\" (UniqueName: \"kubernetes.io/projected/349437a4-aa6e-4e78-95e5-0ee664eba158-kube-api-access-864nb\") pod \"placement-operator-controller-manager-5b964cf4cd-2nnfd\" (UID: \"349437a4-aa6e-4e78-95e5-0ee664eba158\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.791069 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrnnn\" (UniqueName: \"kubernetes.io/projected/317d11b5-971f-4f7c-8ab7-60b95122d08a-kube-api-access-lrnnn\") pod \"test-operator-controller-manager-86c469f8fb-d6v25\" (UID: \"317d11b5-971f-4f7c-8ab7-60b95122d08a\") " pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" Jan 31 07:37:47 crc 
kubenswrapper[4908]: I0131 07:37:47.791119 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvl9h\" (UniqueName: \"kubernetes.io/projected/c3d892f3-217a-4c11-9625-4b0dfffeaca0-kube-api-access-vvl9h\") pod \"ovn-operator-controller-manager-788c46999f-nf2q7\" (UID: \"c3d892f3-217a-4c11-9625-4b0dfffeaca0\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.791209 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2qhq\" (UniqueName: \"kubernetes.io/projected/f8753705-b38b-4124-ad36-013ca716e47e-kube-api-access-r2qhq\") pod \"swift-operator-controller-manager-68fc8c869-rcz4z\" (UID: \"f8753705-b38b-4124-ad36-013ca716e47e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.791252 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxj7c\" (UniqueName: \"kubernetes.io/projected/d00296fc-50ea-4b6e-9c9b-492ea9027347-kube-api-access-lxj7c\") pod \"telemetry-operator-controller-manager-64b5b76f97-bgpbn\" (UID: \"d00296fc-50ea-4b6e-9c9b-492ea9027347\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.794268 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.808206 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-v9f72"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.809213 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.815938 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wf6xj" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.816569 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.816598 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvl9h\" (UniqueName: \"kubernetes.io/projected/c3d892f3-217a-4c11-9625-4b0dfffeaca0-kube-api-access-vvl9h\") pod \"ovn-operator-controller-manager-788c46999f-nf2q7\" (UID: \"c3d892f3-217a-4c11-9625-4b0dfffeaca0\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.831660 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.831783 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864nb\" (UniqueName: \"kubernetes.io/projected/349437a4-aa6e-4e78-95e5-0ee664eba158-kube-api-access-864nb\") pod \"placement-operator-controller-manager-5b964cf4cd-2nnfd\" (UID: \"349437a4-aa6e-4e78-95e5-0ee664eba158\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.845487 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.858369 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxj7c\" (UniqueName: \"kubernetes.io/projected/d00296fc-50ea-4b6e-9c9b-492ea9027347-kube-api-access-lxj7c\") pod \"telemetry-operator-controller-manager-64b5b76f97-bgpbn\" (UID: \"d00296fc-50ea-4b6e-9c9b-492ea9027347\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.863113 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2qhq\" (UniqueName: \"kubernetes.io/projected/f8753705-b38b-4124-ad36-013ca716e47e-kube-api-access-r2qhq\") pod \"swift-operator-controller-manager-68fc8c869-rcz4z\" (UID: \"f8753705-b38b-4124-ad36-013ca716e47e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.875016 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-v9f72"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.881905 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.894157 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swfrl\" (UniqueName: \"kubernetes.io/projected/4601f2a2-ec79-4ab3-a3b5-d176ac2359e8-kube-api-access-swfrl\") pod \"watcher-operator-controller-manager-564965969-v9f72\" (UID: \"4601f2a2-ec79-4ab3-a3b5-d176ac2359e8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.894290 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.894347 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrnnn\" (UniqueName: \"kubernetes.io/projected/317d11b5-971f-4f7c-8ab7-60b95122d08a-kube-api-access-lrnnn\") pod \"test-operator-controller-manager-86c469f8fb-d6v25\" (UID: \"317d11b5-971f-4f7c-8ab7-60b95122d08a\") " pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.894886 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: E0131 07:37:47.894948 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. 
No retries permitted until 2026-01-31 07:37:48.894931047 +0000 UTC m=+975.510875701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.916467 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrnnn\" (UniqueName: \"kubernetes.io/projected/317d11b5-971f-4f7c-8ab7-60b95122d08a-kube-api-access-lrnnn\") pod \"test-operator-controller-manager-86c469f8fb-d6v25\" (UID: \"317d11b5-971f-4f7c-8ab7-60b95122d08a\") " pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.927026 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.965418 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.989856 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.990252 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.991163 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.991197 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.991839 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh"] Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.991952 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.992027 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.995343 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.995655 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xfvdk" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.995840 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.997021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.997082 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcn5\" (UniqueName: \"kubernetes.io/projected/6117707d-0a29-4b0b-bf9e-c23c308952b4-kube-api-access-wzcn5\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.997136 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swfrl\" (UniqueName: \"kubernetes.io/projected/4601f2a2-ec79-4ab3-a3b5-d176ac2359e8-kube-api-access-swfrl\") pod \"watcher-operator-controller-manager-564965969-v9f72\" (UID: \"4601f2a2-ec79-4ab3-a3b5-d176ac2359e8\") " 
pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.997197 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv6g\" (UniqueName: \"kubernetes.io/projected/78b4780a-6127-426f-9d27-754ab311f0f8-kube-api-access-2fv6g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phddh\" (UID: \"78b4780a-6127-426f-9d27-754ab311f0f8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" Jan 31 07:37:47 crc kubenswrapper[4908]: I0131 07:37:47.997225 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.003336 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cr6hd" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.033889 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.038033 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swfrl\" (UniqueName: \"kubernetes.io/projected/4601f2a2-ec79-4ab3-a3b5-d176ac2359e8-kube-api-access-swfrl\") pod \"watcher-operator-controller-manager-564965969-v9f72\" (UID: \"4601f2a2-ec79-4ab3-a3b5-d176ac2359e8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.060087 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.081304 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.099146 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fv6g\" (UniqueName: \"kubernetes.io/projected/78b4780a-6127-426f-9d27-754ab311f0f8-kube-api-access-2fv6g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phddh\" (UID: \"78b4780a-6127-426f-9d27-754ab311f0f8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.099205 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.099302 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.099334 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcn5\" (UniqueName: \"kubernetes.io/projected/6117707d-0a29-4b0b-bf9e-c23c308952b4-kube-api-access-wzcn5\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.100529 4908 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.100580 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:48.600562705 +0000 UTC m=+975.216507359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.100752 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.100785 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:48.600775541 +0000 UTC m=+975.216720195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.141962 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fv6g\" (UniqueName: \"kubernetes.io/projected/78b4780a-6127-426f-9d27-754ab311f0f8-kube-api-access-2fv6g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phddh\" (UID: \"78b4780a-6127-426f-9d27-754ab311f0f8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.145836 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcn5\" (UniqueName: \"kubernetes.io/projected/6117707d-0a29-4b0b-bf9e-c23c308952b4-kube-api-access-wzcn5\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " 
pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.193179 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.201445 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.201620 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.201671 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:37:49.201656891 +0000 UTC m=+975.817601545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.222465 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.359599 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.479152 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.498705 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.503580 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.510346 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp"] Jan 31 07:37:48 crc kubenswrapper[4908]: W0131 07:37:48.538261 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda2b57a_aff7_4b0f_baf1_dcc00fb5aa32.slice/crio-04ca454a75a0c2f60c4698541587e480a81087ddebef1857b5cdde2a48dcc46c WatchSource:0}: Error finding container 04ca454a75a0c2f60c4698541587e480a81087ddebef1857b5cdde2a48dcc46c: Status 404 returned error can't find the container with id 04ca454a75a0c2f60c4698541587e480a81087ddebef1857b5cdde2a48dcc46c Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.610032 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.610111 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.610162 4908 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.610220 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.610224 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:49.610205699 +0000 UTC m=+976.226150353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.610260 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:49.61024978 +0000 UTC m=+976.226194434 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.623342 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.671485 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.685271 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-944q9"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.697963 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.815412 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.822676 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.829129 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bf666"] Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.833411 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2"] Jan 31 07:37:48 crc kubenswrapper[4908]: W0131 07:37:48.835216 4908 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd00296fc_50ea_4b6e_9c9b_492ea9027347.slice/crio-d7cb8fa0004d7fca6a5e3e8e56b668df064fc52a3424061181a5925c420db05c WatchSource:0}: Error finding container d7cb8fa0004d7fca6a5e3e8e56b668df064fc52a3424061181a5925c420db05c: Status 404 returned error can't find the container with id d7cb8fa0004d7fca6a5e3e8e56b668df064fc52a3424061181a5925c420db05c Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.842630 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d4hx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-j8gx2_openstack-operators(e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.845909 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" podUID="e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61" Jan 31 07:37:48 crc kubenswrapper[4908]: I0131 07:37:48.920112 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 07:37:48.920308 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:48 crc kubenswrapper[4908]: E0131 
07:37:48.920357 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:50.920343268 +0000 UTC m=+977.536287922 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.025937 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25"] Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.032490 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-v9f72"] Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.043614 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc"] Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.070084 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-brc72"] Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.077023 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swfrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-v9f72_openstack-operators(4601f2a2-ec79-4ab3-a3b5-d176ac2359e8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.077364 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2qhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-rcz4z_openstack-operators(f8753705-b38b-4124-ad36-013ca716e47e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.078200 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" podUID="4601f2a2-ec79-4ab3-a3b5-d176ac2359e8" Jan 31 07:37:49 crc 
kubenswrapper[4908]: E0131 07:37:49.078566 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" podUID="f8753705-b38b-4124-ad36-013ca716e47e" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.079619 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvl9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-nf2q7_openstack-operators(c3d892f3-217a-4c11-9625-4b0dfffeaca0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.081335 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z"] Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.081513 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" podUID="c3d892f3-217a-4c11-9625-4b0dfffeaca0" Jan 31 07:37:49 crc kubenswrapper[4908]: W0131 07:37:49.081880 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317d11b5_971f_4f7c_8ab7_60b95122d08a.slice/crio-cbaa8c4df7f8ccd1c641342e4c6b62e84b15cabd307a0a3c81f1b818be74dadb WatchSource:0}: Error finding container cbaa8c4df7f8ccd1c641342e4c6b62e84b15cabd307a0a3c81f1b818be74dadb: Status 404 returned error can't find the container with id 
cbaa8c4df7f8ccd1c641342e4c6b62e84b15cabd307a0a3c81f1b818be74dadb Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.087963 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" event={"ID":"d689356c-4f7d-46ed-927d-ce6b20bb4906","Type":"ContainerStarted","Data":"c5432b9aa5a5fe5a13e8aae0fa855202748e4cc00e6f73eff535d95eee699960"} Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.088213 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fv6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-phddh_openstack-operators(78b4780a-6127-426f-9d27-754ab311f0f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.089403 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podUID="78b4780a-6127-426f-9d27-754ab311f0f8" Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.089996 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" event={"ID":"51c0e91e-58e4-4cdf-a06a-b79078097f32","Type":"ContainerStarted","Data":"2ad9df86edb4743143f889abd9ae9504091511658ffbabeb9af640f9150a6975"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.090613 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7"] Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.090962 
4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrnnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-86c469f8fb-d6v25_openstack-operators(317d11b5-971f-4f7c-8ab7-60b95122d08a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.092944 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podUID="317d11b5-971f-4f7c-8ab7-60b95122d08a" Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.094880 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" event={"ID":"8d3bf73c-1178-4613-a68a-7897eaf053e9","Type":"ContainerStarted","Data":"9ccf558b162c0ae7328a636413b16ccc7ce516954491143e596dfa3950796228"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.099796 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" 
event={"ID":"94ede047-b24c-4510-946c-1fcc23ce8862","Type":"ContainerStarted","Data":"1dcf608883d9d87440fbb3c9a1fcc543665e7736d8fbe549cc335f1bf491af42"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.101204 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" event={"ID":"cccc8258-9b99-4e94-911f-46cd1f95e2b7","Type":"ContainerStarted","Data":"dd2ab0ff7b5a8b0dfc8b6c28e4135de434229989714e3ca5e138be94952aaccd"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.103914 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh"] Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.113165 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" event={"ID":"1a292336-6b89-4bd0-9f25-28190fad7f20","Type":"ContainerStarted","Data":"1fd37beb910d0936e647019ecd5118d9dcfd9a69dd271a20e7eb830fe873cc22"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.117401 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" event={"ID":"349437a4-aa6e-4e78-95e5-0ee664eba158","Type":"ContainerStarted","Data":"16e4e2c9c5fe507c20397e2a942fa9206bb5c17f370a7a57a82fb354dc77a3fd"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.121936 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" event={"ID":"1103bcc6-b9d1-44b0-9206-1cf316e40aa1","Type":"ContainerStarted","Data":"0f68c5fa1d933ea394cb3db772bbcb5e921e066031add530d0b213dfb8e0fa9f"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.126087 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" 
event={"ID":"d00296fc-50ea-4b6e-9c9b-492ea9027347","Type":"ContainerStarted","Data":"d7cb8fa0004d7fca6a5e3e8e56b668df064fc52a3424061181a5925c420db05c"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.127690 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" event={"ID":"0f7ae625-c53e-4e59-8fa6-357e0cf2e058","Type":"ContainerStarted","Data":"9e14ca0d333fc5c536d3ff91100b3826a2db0aef4f246f292c6fa698ccbff4f9"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.137339 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" event={"ID":"fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32","Type":"ContainerStarted","Data":"04ca454a75a0c2f60c4698541587e480a81087ddebef1857b5cdde2a48dcc46c"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.149836 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" event={"ID":"e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61","Type":"ContainerStarted","Data":"d94446960dc91464ff6003d64ebfd6c8229b52f3f5494e441493e12297b8227d"} Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.152911 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" event={"ID":"0ba21e20-1850-4eea-9eb1-c07fcb41619f","Type":"ContainerStarted","Data":"8a44ef73998aec6465dd5023a5f9b25498a122ca8368857448f945a68ebf40e7"} Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.158650 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" 
podUID="e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61" Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.230264 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.230417 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.230467 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:37:51.230453676 +0000 UTC m=+977.846398330 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.651768 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:49 crc kubenswrapper[4908]: I0131 07:37:49.652238 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.652055 4908 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.652490 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:51.652471209 +0000 UTC m=+978.268415863 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "webhook-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.652585 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:37:49 crc kubenswrapper[4908]: E0131 07:37:49.652650 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:51.652630443 +0000 UTC m=+978.268575097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.166499 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" event={"ID":"1d777955-e0fc-4554-8e40-b17bdaaf752f","Type":"ContainerStarted","Data":"18ed040698a75023a6d2acfe5738147d94b557432787e95db7a17051effd54cc"} Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.168621 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" event={"ID":"4601f2a2-ec79-4ab3-a3b5-d176ac2359e8","Type":"ContainerStarted","Data":"acbfe11d11b9eb0512a64b21a29a30467a13600d3dbdc07ba30c2e95b2cd9a4a"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.170152 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" podUID="4601f2a2-ec79-4ab3-a3b5-d176ac2359e8" Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.172888 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" event={"ID":"317d11b5-971f-4f7c-8ab7-60b95122d08a","Type":"ContainerStarted","Data":"cbaa8c4df7f8ccd1c641342e4c6b62e84b15cabd307a0a3c81f1b818be74dadb"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.174154 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3\\\"\"" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podUID="317d11b5-971f-4f7c-8ab7-60b95122d08a" Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.176210 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" event={"ID":"f8753705-b38b-4124-ad36-013ca716e47e","Type":"ContainerStarted","Data":"4d4ac8891ffa466604bc6b559c0e2d38ae2f4e56fe19ac50a1e6b1f1db836c78"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.178023 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" podUID="f8753705-b38b-4124-ad36-013ca716e47e" Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.178251 4908 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" event={"ID":"c3d892f3-217a-4c11-9625-4b0dfffeaca0","Type":"ContainerStarted","Data":"29dafe51260df8ba5ff32e864a024b884e19bafa16742223f0a6892341db3bb9"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.179776 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" podUID="c3d892f3-217a-4c11-9625-4b0dfffeaca0" Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.181038 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" event={"ID":"78b4780a-6127-426f-9d27-754ab311f0f8","Type":"ContainerStarted","Data":"932633113aec32fd5f508f5b4a139064f220f35c462fad72c25287fbb3cf7106"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.192151 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podUID="78b4780a-6127-426f-9d27-754ab311f0f8" Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.195615 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" event={"ID":"ed8b4596-65fa-4171-bde7-1507bc3fe80b","Type":"ContainerStarted","Data":"a56cd9cd54ba02506b155a8a17d136ee02d7485016b0cfe126bf000fdf35c23b"} Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.196934 4908 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" podUID="e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61"
Jan 31 07:37:50 crc kubenswrapper[4908]: I0131 07:37:50.985596 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"
Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.986154 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 31 07:37:50 crc kubenswrapper[4908]: E0131 07:37:50.986211 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:54.986194002 +0000 UTC m=+981.602138656 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.211907 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3\\\"\"" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podUID="317d11b5-971f-4f7c-8ab7-60b95122d08a"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.212235 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podUID="78b4780a-6127-426f-9d27-754ab311f0f8"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.212298 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" podUID="f8753705-b38b-4124-ad36-013ca716e47e"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.212334 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" podUID="c3d892f3-217a-4c11-9625-4b0dfffeaca0"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.212392 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" podUID="4601f2a2-ec79-4ab3-a3b5-d176ac2359e8"
Jan 31 07:37:51 crc kubenswrapper[4908]: I0131 07:37:51.298360 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.302153 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.302215 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:37:55.302196837 +0000 UTC m=+981.918141491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: I0131 07:37:51.706493 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:37:51 crc kubenswrapper[4908]: I0131 07:37:51.706646 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.706684 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.706773 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:55.706750645 +0000 UTC m=+982.322695349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.706774 4908 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 31 07:37:51 crc kubenswrapper[4908]: E0131 07:37:51.706815 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:55.706806107 +0000 UTC m=+982.322750831 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: I0131 07:37:55.065201 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.065355 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.065611 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert
podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:03.065594388 +0000 UTC m=+989.681539042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: I0131 07:37:55.369833 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.370024 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.370117 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:38:03.370092936 +0000 UTC m=+989.986037660 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: I0131 07:37:55.775232 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:37:55 crc kubenswrapper[4908]: I0131 07:37:55.775342 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.775414 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.775494 4908 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.775496 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:03.775472815 +0000 UTC m=+990.391417469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found
Jan 31 07:37:55 crc kubenswrapper[4908]: E0131 07:37:55.775565 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:03.775550427 +0000 UTC m=+990.391495081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "webhook-server-cert" not found
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.155853 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.156532 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q55c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-j6b5m_openstack-operators(8d3bf73c-1178-4613-a68a-7897eaf053e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.157798 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" podUID="8d3bf73c-1178-4613-a68a-7897eaf053e9"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.297672 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" podUID="8d3bf73c-1178-4613-a68a-7897eaf053e9"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.793765 4908 log.go:32] "PullImage from image service
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.794898 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgvpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-bf666_openstack-operators(1a292336-6b89-4bd0-9f25-28190fad7f20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:38:02 crc kubenswrapper[4908]: E0131 07:38:02.796595 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" podUID="1a292336-6b89-4bd0-9f25-28190fad7f20"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.086817 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.087092 4908 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.087147 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert podName:2decce95-3c8d-4a0e-b624-dcb914947d90 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:19.087129376 +0000 UTC m=+1005.703074040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert") pod "infra-operator-controller-manager-79955696d6-kqzsg" (UID: "2decce95-3c8d-4a0e-b624-dcb914947d90") : secret "infra-operator-webhook-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.304040 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" event={"ID":"1103bcc6-b9d1-44b0-9206-1cf316e40aa1","Type":"ContainerStarted","Data":"eda0c371ba453ea3c566c106c77ef0137dd75b22f7cda35d4110ab13f14147ab"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.304369 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.317757 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" event={"ID":"51c0e91e-58e4-4cdf-a06a-b79078097f32","Type":"ContainerStarted","Data":"1d99a359344ecd0ccc33b8cb7e5713ede787c6aeba12ed463ca5807dbb0e0dde"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.318038 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.327884 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" event={"ID":"1d777955-e0fc-4554-8e40-b17bdaaf752f","Type":"ContainerStarted","Data":"876a0c4da4817beb21f0d49be5b3b6e35cd2f48beb749f29433efe04d012a145"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.328683 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.332739 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" event={"ID":"d00296fc-50ea-4b6e-9c9b-492ea9027347","Type":"ContainerStarted","Data":"48f65ccc012543c7eae501831c9ca8bd99df28d8aeb4cf7e4b95cdfc3ac9bf47"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.333541 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.335849 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" podStartSLOduration=2.095763574 podStartE2EDuration="16.335829965s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.542289979 +0000 UTC m=+975.158234633" lastFinishedPulling="2026-01-31 07:38:02.78235637 +0000 UTC m=+989.398301024" observedRunningTime="2026-01-31 07:38:03.333896597 +0000 UTC m=+989.949841281" watchObservedRunningTime="2026-01-31 07:38:03.335829965 +0000 UTC m=+989.951774619"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.352237 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" event={"ID":"0f7ae625-c53e-4e59-8fa6-357e0cf2e058","Type":"ContainerStarted","Data":"7fa912cb5c4d27dbd4cc96562e544863cc888264b7e8a45126fbd2760b98d2b8"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.353205 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.373631 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" event={"ID":"fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32","Type":"ContainerStarted","Data":"73f964cea0b75ca3104646dfc1f505d47f31e05d06e1677ce4d695ddcc04c6cc"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.374471 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.376748 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" podStartSLOduration=2.298407407 podStartE2EDuration="16.376732493s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.695708987 +0000 UTC m=+975.311653641" lastFinishedPulling="2026-01-31 07:38:02.774034063 +0000 UTC m=+989.389978727" observedRunningTime="2026-01-31 07:38:03.373312338 +0000 UTC m=+989.989256992" watchObservedRunningTime="2026-01-31 07:38:03.376732493 +0000 UTC m=+989.992677147"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.386679 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" event={"ID":"349437a4-aa6e-4e78-95e5-0ee664eba158","Type":"ContainerStarted","Data":"c7907c147174b5f1f069e7d6ec6da1109c9cdb3fa757475e1ea1db4418705330"}
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.387335 4908
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd"
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.387738 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" podUID="1a292336-6b89-4bd0-9f25-28190fad7f20"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.390989 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.391172 4908 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.391228 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert podName:043d998e-7d47-4223-8bb5-8aa2f4a16b9c nodeName:}" failed. No retries permitted until 2026-01-31 07:38:19.391208854 +0000 UTC m=+1006.007153508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" (UID: "043d998e-7d47-4223-8bb5-8aa2f4a16b9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.395497 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" podStartSLOduration=2.659803542 podStartE2EDuration="16.39548347s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.046032986 +0000 UTC m=+975.661977640" lastFinishedPulling="2026-01-31 07:38:02.781712914 +0000 UTC m=+989.397657568" observedRunningTime="2026-01-31 07:38:03.392347942 +0000 UTC m=+990.008292596" watchObservedRunningTime="2026-01-31 07:38:03.39548347 +0000 UTC m=+990.011428124"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.433924 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn" podStartSLOduration=2.497710077 podStartE2EDuration="16.433905086s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.837744432 +0000 UTC m=+975.453689086" lastFinishedPulling="2026-01-31 07:38:02.773939441 +0000 UTC m=+989.389884095" observedRunningTime="2026-01-31 07:38:03.419301363 +0000 UTC m=+990.035246037" watchObservedRunningTime="2026-01-31 07:38:03.433905086 +0000 UTC m=+990.049849740"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.486911 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" podStartSLOduration=2.529101288 podStartE2EDuration="16.486891965s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.824736138 +0000 UTC m=+975.440680792" lastFinishedPulling="2026-01-31 07:38:02.782526815 +0000 UTC m=+989.398471469" observedRunningTime="2026-01-31 07:38:03.483681865 +0000 UTC m=+990.099626529" watchObservedRunningTime="2026-01-31 07:38:03.486891965 +0000 UTC m=+990.102836619"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.519129 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" podStartSLOduration=2.280711717 podStartE2EDuration="16.519105217s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.544008142 +0000 UTC m=+975.159952786" lastFinishedPulling="2026-01-31 07:38:02.782401632 +0000 UTC m=+989.398346286" observedRunningTime="2026-01-31 07:38:03.514236835 +0000 UTC m=+990.130181509" watchObservedRunningTime="2026-01-31 07:38:03.519105217 +0000 UTC m=+990.135049871"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.576642 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" podStartSLOduration=2.294730195 podStartE2EDuration="16.576623798s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.486260344 +0000 UTC m=+975.102204998" lastFinishedPulling="2026-01-31 07:38:02.768153947 +0000 UTC m=+989.384098601" observedRunningTime="2026-01-31 07:38:03.574129686 +0000 UTC m=+990.190074360" watchObservedRunningTime="2026-01-31 07:38:03.576623798 +0000 UTC m=+990.192568452"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.796942 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.797079 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.797239 4908 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: E0131 07:38:03.797304 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs podName:6117707d-0a29-4b0b-bf9e-c23c308952b4 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:19.79728672 +0000 UTC m=+1006.413231374 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs") pod "openstack-operator-controller-manager-64c4b754d-2gx64" (UID: "6117707d-0a29-4b0b-bf9e-c23c308952b4") : secret "metrics-server-cert" not found
Jan 31 07:38:03 crc kubenswrapper[4908]: I0131 07:38:03.802132 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-webhook-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:38:08 crc kubenswrapper[4908]: I0131 07:38:08.037665 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bgpbn"
Jan 31 07:38:10 crc kubenswrapper[4908]: I0131 07:38:10.430820 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:38:10 crc kubenswrapper[4908]: I0131 07:38:10.431105 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:38:15 crc kubenswrapper[4908]: I0131 07:38:15.470647 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p"
event={"ID":"d689356c-4f7d-46ed-927d-ce6b20bb4906","Type":"ContainerStarted","Data":"7bf08a8426868cd491c3183293a83e8c413a5426042d59b9e10f7237145ea4dc"}
Jan 31 07:38:15 crc kubenswrapper[4908]: I0131 07:38:15.472074 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" event={"ID":"94ede047-b24c-4510-946c-1fcc23ce8862","Type":"ContainerStarted","Data":"679ae53b3f04510d256558960d302291b78700f67156a384615dcc65ee72784e"}
Jan 31 07:38:15 crc kubenswrapper[4908]: E0131 07:38:15.542049 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3"
Jan 31 07:38:15 crc kubenswrapper[4908]: E0131 07:38:15.542114 4908 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3"
Jan 31 07:38:15 crc kubenswrapper[4908]: E0131 07:38:15.542268 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrnnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-86c469f8fb-d6v25_openstack-operators(317d11b5-971f-4f7c-8ab7-60b95122d08a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:38:15 crc kubenswrapper[4908]: E0131 07:38:15.543481 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podUID="317d11b5-971f-4f7c-8ab7-60b95122d08a"
Jan 31 07:38:16 crc kubenswrapper[4908]: I0131 07:38:16.482686 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs"
Jan 31 07:38:16 crc kubenswrapper[4908]: I0131 07:38:16.483061 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p"
Jan 31 07:38:16 crc kubenswrapper[4908]: I0131 07:38:16.499716 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" podStartSLOduration=14.968192769 podStartE2EDuration="29.499698844s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.255972153 +0000 UTC m=+974.871916807" lastFinishedPulling="2026-01-31 07:38:02.787478228 +0000 UTC m=+989.403422882" observedRunningTime="2026-01-31 07:38:16.495379336 +0000 UTC m=+1003.111324000" watchObservedRunningTime="2026-01-31 07:38:16.499698844 +0000 UTC m=+1003.115643498"
Jan 31 07:38:16 crc kubenswrapper[4908]: I0131 07:38:16.518738 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" podStartSLOduration=15.401964034 podStartE2EDuration="29.518717777s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.684299773 +0000 UTC m=+975.300244427" lastFinishedPulling="2026-01-31 07:38:02.801053516 +0000 UTC m=+989.416998170" observedRunningTime="2026-01-31 07:38:16.516800359 +0000 UTC m=+1003.132745023" watchObservedRunningTime="2026-01-31 07:38:16.518717777 +0000 UTC m=+1003.134662431"
Jan 31 07:38:17 crc kubenswrapper[4908]: E0131 07:38:17.190313 4908 log.go:32] "PullImage from image service failed" err="rpc error: code
= Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 31 07:38:17 crc kubenswrapper[4908]: E0131 07:38:17.190493 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fv6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-phddh_openstack-operators(78b4780a-6127-426f-9d27-754ab311f0f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:38:17 crc kubenswrapper[4908]: E0131 07:38:17.192239 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podUID="78b4780a-6127-426f-9d27-754ab311f0f8" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.378454 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-nkfsp" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.471695 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-p4nmk" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.514544 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-z9bfq" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.729949 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-944q9" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.848344 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-h8xqc" Jan 31 07:38:17 crc kubenswrapper[4908]: I0131 07:38:17.994031 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2nnfd" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.133837 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.140180 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2decce95-3c8d-4a0e-b624-dcb914947d90-cert\") pod \"infra-operator-controller-manager-79955696d6-kqzsg\" (UID: \"2decce95-3c8d-4a0e-b624-dcb914947d90\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.326634 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.438135 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.450509 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/043d998e-7d47-4223-8bb5-8aa2f4a16b9c-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl\" (UID: \"043d998e-7d47-4223-8bb5-8aa2f4a16b9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.456833 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.537214 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" event={"ID":"0ba21e20-1850-4eea-9eb1-c07fcb41619f","Type":"ContainerStarted","Data":"14e11e9decf3408ac5a3fc06593ccd23f23641735e9a82d0c6d2efcfbd6f042c"} Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.538707 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.543639 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" event={"ID":"cccc8258-9b99-4e94-911f-46cd1f95e2b7","Type":"ContainerStarted","Data":"7882d57faa08886d3c5cbbe2ce78daaf445bbe8601359ff480bc549feb6eb7c1"} Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.544251 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.564001 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" podStartSLOduration=18.314947982 podStartE2EDuration="32.563971577s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.543265703 +0000 UTC m=+975.159210357" lastFinishedPulling="2026-01-31 07:38:02.792289298 +0000 UTC m=+989.408233952" observedRunningTime="2026-01-31 07:38:19.562201933 +0000 UTC m=+1006.178146587" watchObservedRunningTime="2026-01-31 07:38:19.563971577 +0000 UTC m=+1006.179916231" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.572153 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" event={"ID":"ed8b4596-65fa-4171-bde7-1507bc3fe80b","Type":"ContainerStarted","Data":"348a599fb30c0b3a8de1cf41cfe1e6cc4fea404dddf3a237f8ab1391739014a9"} Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.572750 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.611250 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" podStartSLOduration=19.414356787 podStartE2EDuration="33.611231124s" podCreationTimestamp="2026-01-31 07:37:46 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.626755121 +0000 UTC m=+975.242699775" lastFinishedPulling="2026-01-31 07:38:02.823629458 +0000 UTC m=+989.439574112" observedRunningTime="2026-01-31 07:38:19.588752224 +0000 UTC m=+1006.204696878" watchObservedRunningTime="2026-01-31 07:38:19.611231124 +0000 UTC m=+1006.227175778" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.615234 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" podStartSLOduration=18.863542436 podStartE2EDuration="32.615212363s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.087832546 +0000 UTC m=+975.703777210" lastFinishedPulling="2026-01-31 07:38:02.839502483 +0000 UTC m=+989.455447137" observedRunningTime="2026-01-31 07:38:19.610127886 +0000 UTC m=+1006.226072550" watchObservedRunningTime="2026-01-31 07:38:19.615212363 +0000 UTC m=+1006.231157017" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.848507 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.868871 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6117707d-0a29-4b0b-bf9e-c23c308952b4-metrics-certs\") pod \"openstack-operator-controller-manager-64c4b754d-2gx64\" (UID: \"6117707d-0a29-4b0b-bf9e-c23c308952b4\") " pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:38:19 crc kubenswrapper[4908]: I0131 07:38:19.914448 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"] Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.082917 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"] Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.145770 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.417662 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"] Jan 31 07:38:20 crc kubenswrapper[4908]: W0131 07:38:20.418463 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6117707d_0a29_4b0b_bf9e_c23c308952b4.slice/crio-344024abf722458693dd46d103b6cef49160f2f60cd8276c4986fe81d5c7391d WatchSource:0}: Error finding container 344024abf722458693dd46d103b6cef49160f2f60cd8276c4986fe81d5c7391d: Status 404 returned error can't find the container with id 344024abf722458693dd46d103b6cef49160f2f60cd8276c4986fe81d5c7391d Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.582070 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" event={"ID":"043d998e-7d47-4223-8bb5-8aa2f4a16b9c","Type":"ContainerStarted","Data":"714254e4e2fb25d8f03befc677481b3a4393b8aafeb3a7e904fe879cd4b87015"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.583713 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" event={"ID":"4601f2a2-ec79-4ab3-a3b5-d176ac2359e8","Type":"ContainerStarted","Data":"e0ee446ec7a679ef32ce3d036388d34e9a991dd59dda19513e9da55643cecae7"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.584535 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.590160 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" 
event={"ID":"2decce95-3c8d-4a0e-b624-dcb914947d90","Type":"ContainerStarted","Data":"c01e0fe0ae91d069c9c090ca33764bd839374ffe9da32f4a8a3350a5b35613c1"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.591789 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" event={"ID":"f8753705-b38b-4124-ad36-013ca716e47e","Type":"ContainerStarted","Data":"fbc865fa177fad504688dbac55cbe767bb93685edf25920f0961bf9bd416a508"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.592379 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.593962 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" event={"ID":"8d3bf73c-1178-4613-a68a-7897eaf053e9","Type":"ContainerStarted","Data":"2675ae337add70384f8786e37698e56b64da7b7e2cd6c38fe7fd20ac33552850"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.594394 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.596658 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" event={"ID":"e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61","Type":"ContainerStarted","Data":"6cfcbf06cb14ece03689de4f3959c5ed36e5bd4bbb3f079e8b29c4c27f1061dd"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.596838 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.598248 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" event={"ID":"6117707d-0a29-4b0b-bf9e-c23c308952b4","Type":"ContainerStarted","Data":"344024abf722458693dd46d103b6cef49160f2f60cd8276c4986fe81d5c7391d"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.598889 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.603916 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72" podStartSLOduration=3.470041306 podStartE2EDuration="33.603896468s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.076842943 +0000 UTC m=+975.692787597" lastFinishedPulling="2026-01-31 07:38:19.210698105 +0000 UTC m=+1005.826642759" observedRunningTime="2026-01-31 07:38:20.599408137 +0000 UTC m=+1007.215352791" watchObservedRunningTime="2026-01-31 07:38:20.603896468 +0000 UTC m=+1007.219841122" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.604350 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" event={"ID":"c3d892f3-217a-4c11-9625-4b0dfffeaca0","Type":"ContainerStarted","Data":"395006f1bfd352a10228e3de7a823025556cacc4cce5c7adaf710c72258a7053"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.626943 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.643105 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" 
event={"ID":"1a292336-6b89-4bd0-9f25-28190fad7f20","Type":"ContainerStarted","Data":"1ededea5baf86556cde2a03567f02caa9c435aa90ad2479da6fceaa3786dddba"} Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.685380 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z" podStartSLOduration=3.592694349 podStartE2EDuration="33.685359006s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.077193882 +0000 UTC m=+975.693138536" lastFinishedPulling="2026-01-31 07:38:19.169858549 +0000 UTC m=+1005.785803193" observedRunningTime="2026-01-31 07:38:20.636866939 +0000 UTC m=+1007.252811593" watchObservedRunningTime="2026-01-31 07:38:20.685359006 +0000 UTC m=+1007.301303660" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.690524 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m" podStartSLOduration=3.176982222 podStartE2EDuration="33.690509444s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.701695136 +0000 UTC m=+975.317639790" lastFinishedPulling="2026-01-31 07:38:19.215222358 +0000 UTC m=+1005.831167012" observedRunningTime="2026-01-31 07:38:20.673922901 +0000 UTC m=+1007.289867545" watchObservedRunningTime="2026-01-31 07:38:20.690509444 +0000 UTC m=+1007.306454098" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.748430 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" podStartSLOduration=33.748406075 podStartE2EDuration="33.748406075s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:20.723468294 +0000 UTC m=+1007.339412948" 
watchObservedRunningTime="2026-01-31 07:38:20.748406075 +0000 UTC m=+1007.364350729" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.750918 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2" podStartSLOduration=3.377352729 podStartE2EDuration="33.750909567s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.84247105 +0000 UTC m=+975.458415704" lastFinishedPulling="2026-01-31 07:38:19.216027888 +0000 UTC m=+1005.831972542" observedRunningTime="2026-01-31 07:38:20.747355099 +0000 UTC m=+1007.363299773" watchObservedRunningTime="2026-01-31 07:38:20.750909567 +0000 UTC m=+1007.366854221" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.776447 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666" podStartSLOduration=3.397852479 podStartE2EDuration="33.776420702s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:48.837447665 +0000 UTC m=+975.453392319" lastFinishedPulling="2026-01-31 07:38:19.216015888 +0000 UTC m=+1005.831960542" observedRunningTime="2026-01-31 07:38:20.769757206 +0000 UTC m=+1007.385701870" watchObservedRunningTime="2026-01-31 07:38:20.776420702 +0000 UTC m=+1007.392365356" Jan 31 07:38:20 crc kubenswrapper[4908]: I0131 07:38:20.790678 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7" podStartSLOduration=3.652494587 podStartE2EDuration="33.790657416s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.079497319 +0000 UTC m=+975.695441973" lastFinishedPulling="2026-01-31 07:38:19.217660148 +0000 UTC m=+1005.833604802" observedRunningTime="2026-01-31 07:38:20.787302563 +0000 UTC m=+1007.403247227" 
watchObservedRunningTime="2026-01-31 07:38:20.790657416 +0000 UTC m=+1007.406602070" Jan 31 07:38:21 crc kubenswrapper[4908]: I0131 07:38:21.656355 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64" event={"ID":"6117707d-0a29-4b0b-bf9e-c23c308952b4","Type":"ContainerStarted","Data":"55bc0a84c023f1e1fcffd5da91d43f947dc7f2dca1a3f47e79e280334fefb910"} Jan 31 07:38:23 crc kubenswrapper[4908]: I0131 07:38:23.669850 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" event={"ID":"043d998e-7d47-4223-8bb5-8aa2f4a16b9c","Type":"ContainerStarted","Data":"93626b98b2eff1fec523abeeb0e66a13704317a91912514da7ecbcd4515d1be4"} Jan 31 07:38:23 crc kubenswrapper[4908]: I0131 07:38:23.670181 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" Jan 31 07:38:23 crc kubenswrapper[4908]: I0131 07:38:23.671878 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" event={"ID":"2decce95-3c8d-4a0e-b624-dcb914947d90","Type":"ContainerStarted","Data":"5f90aa9e7768ee613fd13a57b68829bc5a0df24a1bec338d79371f430c3a3cf6"} Jan 31 07:38:23 crc kubenswrapper[4908]: I0131 07:38:23.672134 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" Jan 31 07:38:23 crc kubenswrapper[4908]: I0131 07:38:23.705549 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl" podStartSLOduration=34.113593183 podStartE2EDuration="36.705525131s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:38:20.097778982 +0000 UTC 
m=+1006.713723626" lastFinishedPulling="2026-01-31 07:38:22.68971092 +0000 UTC m=+1009.305655574" observedRunningTime="2026-01-31 07:38:23.69747144 +0000 UTC m=+1010.313416094" watchObservedRunningTime="2026-01-31 07:38:23.705525131 +0000 UTC m=+1010.321469785" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.388164 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-d4khs" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.404651 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg" podStartSLOduration=37.651541854 podStartE2EDuration="40.404634533s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:38:19.935165185 +0000 UTC m=+1006.551109839" lastFinishedPulling="2026-01-31 07:38:22.688257864 +0000 UTC m=+1009.304202518" observedRunningTime="2026-01-31 07:38:23.726453592 +0000 UTC m=+1010.342398246" watchObservedRunningTime="2026-01-31 07:38:27.404634533 +0000 UTC m=+1014.020579177" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.612701 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-f254v" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.634134 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fvt6p" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.635516 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-9xflb" Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.793170 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-brc72" 
Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.834539 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-j6b5m"
Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.883009 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666"
Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.887570 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bf666"
Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.936201 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-j8gx2"
Jan 31 07:38:27 crc kubenswrapper[4908]: I0131 07:38:27.974797 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-nf2q7"
Jan 31 07:38:28 crc kubenswrapper[4908]: I0131 07:38:28.063093 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rcz4z"
Jan 31 07:38:28 crc kubenswrapper[4908]: I0131 07:38:28.196453 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-v9f72"
Jan 31 07:38:28 crc kubenswrapper[4908]: E0131 07:38:28.943782 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.217:5001/openstack-k8s-operators/test-operator:102c241c987a7f788e46c748cdf8180a88940ad3\\\"\"" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podUID="317d11b5-971f-4f7c-8ab7-60b95122d08a"
Jan 31 07:38:29 crc kubenswrapper[4908]: I0131 07:38:29.333067 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-kqzsg"
Jan 31 07:38:29 crc kubenswrapper[4908]: I0131 07:38:29.462636 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl"
Jan 31 07:38:29 crc kubenswrapper[4908]: E0131 07:38:29.945323 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podUID="78b4780a-6127-426f-9d27-754ab311f0f8"
Jan 31 07:38:30 crc kubenswrapper[4908]: I0131 07:38:30.152123 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64c4b754d-2gx64"
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.431491 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.432089 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.432133 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm"
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.432733 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.432781 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19" gracePeriod=600
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.784030 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19" exitCode=0
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.784071 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19"}
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.784097 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde"}
Jan 31 07:38:40 crc kubenswrapper[4908]: I0131 07:38:40.784114 4908 scope.go:117] "RemoveContainer" containerID="8fb1fe09c148821fb5edb05d0d628b8701a9cd90e03f6d948ce3ba250379ba75"
Jan 31 07:38:41 crc kubenswrapper[4908]: I0131 07:38:41.941580 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 07:38:43 crc kubenswrapper[4908]: I0131 07:38:43.810771 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" event={"ID":"78b4780a-6127-426f-9d27-754ab311f0f8","Type":"ContainerStarted","Data":"1fd4f84944c8b52e33a202f4a5e41fbe3684134ba87a8e273add9918acda26da"}
Jan 31 07:38:43 crc kubenswrapper[4908]: I0131 07:38:43.829924 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phddh" podStartSLOduration=3.224020504 podStartE2EDuration="56.829905269s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.087997161 +0000 UTC m=+975.703941815" lastFinishedPulling="2026-01-31 07:38:42.693881926 +0000 UTC m=+1029.309826580" observedRunningTime="2026-01-31 07:38:43.825810777 +0000 UTC m=+1030.441755431" watchObservedRunningTime="2026-01-31 07:38:43.829905269 +0000 UTC m=+1030.445849923"
Jan 31 07:38:46 crc kubenswrapper[4908]: I0131 07:38:46.829824 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" event={"ID":"317d11b5-971f-4f7c-8ab7-60b95122d08a","Type":"ContainerStarted","Data":"c1c180c9e454772d9bffd5db5f48875b10c88e5cbde1ddac211a9266cab971f3"}
Jan 31 07:38:46 crc kubenswrapper[4908]: I0131 07:38:46.830486 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25"
Jan 31 07:38:46 crc kubenswrapper[4908]: I0131 07:38:46.859087 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25" podStartSLOduration=2.776600219 podStartE2EDuration="59.859069698s" podCreationTimestamp="2026-01-31 07:37:47 +0000 UTC" firstStartedPulling="2026-01-31 07:37:49.090701488 +0000 UTC m=+975.706646142" lastFinishedPulling="2026-01-31 07:38:46.173170967 +0000 UTC m=+1032.789115621" observedRunningTime="2026-01-31 07:38:46.856849422 +0000 UTC m=+1033.472794076" watchObservedRunningTime="2026-01-31 07:38:46.859069698 +0000 UTC m=+1033.475014352"
Jan 31 07:38:58 crc kubenswrapper[4908]: I0131 07:38:58.084736 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-86c469f8fb-d6v25"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.306742 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"]
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.308699 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.311047 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.311291 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.311544 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-plpmr"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.314198 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.330164 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"]
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.425810 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"]
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.426948 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.431854 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.432830 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"]
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.449169 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5kh5\" (UniqueName: \"kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.449267 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.550614 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.550703 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.550732 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5kh5\" (UniqueName: \"kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.550782 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.550820 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.551795 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.581908 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5kh5\" (UniqueName: \"kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5\") pod \"dnsmasq-dns-675f4bcbfc-vn4f5\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.652208 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.652531 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.652586 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.653345 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.653373 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.673064 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") pod \"dnsmasq-dns-78dd6ddcc-544z9\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.680896 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5"
Jan 31 07:39:15 crc kubenswrapper[4908]: I0131 07:39:15.750313 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9"
Jan 31 07:39:16 crc kubenswrapper[4908]: I0131 07:39:16.168733 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"]
Jan 31 07:39:16 crc kubenswrapper[4908]: I0131 07:39:16.219510 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"]
Jan 31 07:39:16 crc kubenswrapper[4908]: W0131 07:39:16.220441 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c07e4c1_245f_4cd9_a49b_c12ed2d585e2.slice/crio-3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6 WatchSource:0}: Error finding container 3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6: Status 404 returned error can't find the container with id 3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6
Jan 31 07:39:17 crc kubenswrapper[4908]: I0131 07:39:17.023010 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5" event={"ID":"c354654c-8b0b-48a4-a8ea-4cce2ba23701","Type":"ContainerStarted","Data":"80122a604ce456f7bb2653c5dc327276a0ce61ee484f2a89687e25911c48ff50"}
Jan 31 07:39:17 crc kubenswrapper[4908]: I0131 07:39:17.024796 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9" event={"ID":"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2","Type":"ContainerStarted","Data":"3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6"}
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.344816 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.382554 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.383744 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.393264 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.401637 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.401805 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgkvd\" (UniqueName: \"kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.401884 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.503081 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.503134 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgkvd\" (UniqueName: \"kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.503155 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.504772 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.504807 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.526941 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgkvd\" (UniqueName: \"kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd\") pod \"dnsmasq-dns-666b6646f7-l2hrp\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.625600 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.656917 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.659196 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.679905 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"]
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.700949 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.806799 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.806876 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqqq\" (UniqueName: \"kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.806919 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.908692 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.909544 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqqq\" (UniqueName: \"kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.909934 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.910700 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.911105 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.931093 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqqq\" (UniqueName: \"kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq\") pod \"dnsmasq-dns-57d769cc4f-v4vxc\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:18 crc kubenswrapper[4908]: I0131 07:39:18.996706 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.235138 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"]
Jan 31 07:39:19 crc kubenswrapper[4908]: W0131 07:39:19.246444 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fb7706f_2af6_4cf0_8221_9bbf78b261a0.slice/crio-ec751d7d7303ab7e6a7f217f6a2f89a49f4e6b6a5e7d4b4227b6ada98ea68b93 WatchSource:0}: Error finding container ec751d7d7303ab7e6a7f217f6a2f89a49f4e6b6a5e7d4b4227b6ada98ea68b93: Status 404 returned error can't find the container with id ec751d7d7303ab7e6a7f217f6a2f89a49f4e6b6a5e7d4b4227b6ada98ea68b93
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.480766 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.482337 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.484258 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.484524 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.484587 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lsd77"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.484616 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.490763 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"]
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.492500 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.492736 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.492737 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.499821 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.623623 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.623672 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.623704 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.623729 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.623928 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624058 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624099 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2jz\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624129 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624244 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624288 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.624320 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.726869 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.726913 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2jz\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.726935 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727008 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727030 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727045 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727114 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727130 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727149 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.727165 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.728159 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.728431 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.732459 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.732526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.732832 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.733450 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.742436 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.746730 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.749596 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.749628 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.768600 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.768685 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2jz\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0"
Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.790357 4908 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " pod="openstack/rabbitmq-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.812317 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.821948 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.827740 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.837543 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pnrnz" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.837732 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.837871 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.837918 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.837875 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.838122 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.839365 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 
07:39:19.848963 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933607 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933655 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933681 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933707 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933729 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65rm\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933750 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933794 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933849 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933865 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933943 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:19 crc kubenswrapper[4908]: I0131 07:39:19.933970 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.036787 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.036866 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.036905 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.036935 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037009 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037039 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037067 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65rm\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037099 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037330 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 
07:39:20.037372 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.037390 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.038525 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.038692 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.038751 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.039320 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.039522 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.039753 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.043031 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.043525 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.043862 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.047152 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.059535 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65rm\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.075784 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" event={"ID":"5fb7706f-2af6-4cf0-8221-9bbf78b261a0","Type":"ContainerStarted","Data":"ec751d7d7303ab7e6a7f217f6a2f89a49f4e6b6a5e7d4b4227b6ada98ea68b93"} Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.078723 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" event={"ID":"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5","Type":"ContainerStarted","Data":"0c3a8e95cdf3f267b5ad1e0db6f5a466ced047ee19205443a17462a137fbabe9"} Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.092712 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.202864 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.387941 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:39:20 crc kubenswrapper[4908]: W0131 07:39:20.391517 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1644408_1d98_43ed_b7eb_f399d80a7d10.slice/crio-c0bca229c05ee858a23d67a67ac967ff9c99701e49df70f21986d67549c09180 WatchSource:0}: Error finding container c0bca229c05ee858a23d67a67ac967ff9c99701e49df70f21986d67549c09180: Status 404 returned error can't find the container with id c0bca229c05ee858a23d67a67ac967ff9c99701e49df70f21986d67549c09180 Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.646381 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:39:20 crc kubenswrapper[4908]: W0131 07:39:20.667280 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4df218c_dfc0_4c17_8b5a_4649e3d4e710.slice/crio-77b409b12332f9d16d1207c19feb52042ee916c2fad40a0d5a00736d1b50b8b3 WatchSource:0}: Error finding container 77b409b12332f9d16d1207c19feb52042ee916c2fad40a0d5a00736d1b50b8b3: Status 404 returned error can't find the container with id 77b409b12332f9d16d1207c19feb52042ee916c2fad40a0d5a00736d1b50b8b3 Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.927768 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.929893 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.940250 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nkm69" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.940336 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.940480 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.940639 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.947350 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 07:39:20 crc kubenswrapper[4908]: I0131 07:39:20.960966 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.064917 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.064997 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065127 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065235 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wm6\" (UniqueName: \"kubernetes.io/projected/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kube-api-access-z7wm6\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065289 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065455 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065614 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.065725 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.088874 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerStarted","Data":"c0bca229c05ee858a23d67a67ac967ff9c99701e49df70f21986d67549c09180"} Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.093589 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerStarted","Data":"77b409b12332f9d16d1207c19feb52042ee916c2fad40a0d5a00736d1b50b8b3"} Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167014 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167076 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wm6\" (UniqueName: \"kubernetes.io/projected/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kube-api-access-z7wm6\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167109 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 
07:39:21.167137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167179 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167218 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167290 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.167321 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0" Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.168161 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.169291 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.169302 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.171254 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.176395 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.184373 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.185745 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.198892 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wm6\" (UniqueName: \"kubernetes.io/projected/0b3b18d7-fe50-4d65-b351-bb3bf14854f1-kube-api-access-z7wm6\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.202345 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"0b3b18d7-fe50-4d65-b351-bb3bf14854f1\") " pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.310332 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 31 07:39:21 crc kubenswrapper[4908]: I0131 07:39:21.883093 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.237694 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.239800 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.244283 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mv8xw"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.244399 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.244508 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.244856 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.255041 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283036 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283086 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283133 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mn76\" (UniqueName: \"kubernetes.io/projected/b3616414-a3b1-49e4-b87e-29abb6752ccb-kube-api-access-8mn76\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283160 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283182 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283222 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283248 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.283304 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.384809 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mn76\" (UniqueName: \"kubernetes.io/projected/b3616414-a3b1-49e4-b87e-29abb6752ccb-kube-api-access-8mn76\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.384863 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.384888 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.384948 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.384973 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.385031 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.385070 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.385093 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.385817 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.386550 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.386823 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.386956 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.387418 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3616414-a3b1-49e4-b87e-29abb6752ccb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.398690 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.400324 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3616414-a3b1-49e4-b87e-29abb6752ccb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.416659 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mn76\" (UniqueName: \"kubernetes.io/projected/b3616414-a3b1-49e4-b87e-29abb6752ccb-kube-api-access-8mn76\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.437052 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b3616414-a3b1-49e4-b87e-29abb6752ccb\") " pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.575166 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.609538 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.610820 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.619469 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.620043 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-npszj"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.620406 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.625238 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.697735 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-kolla-config\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.698375 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.698462 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k897\" (UniqueName: \"kubernetes.io/projected/dc051fb6-064f-4ae7-8a0b-c69967d67049-kube-api-access-6k897\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.698485 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.698599 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-config-data\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.803420 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-config-data\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.803506 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-kolla-config\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.803531 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.803636 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k897\" (UniqueName: \"kubernetes.io/projected/dc051fb6-064f-4ae7-8a0b-c69967d67049-kube-api-access-6k897\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.803666 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.805428 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-kolla-config\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.805750 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc051fb6-064f-4ae7-8a0b-c69967d67049-config-data\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.808699 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.811620 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc051fb6-064f-4ae7-8a0b-c69967d67049-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.850299 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k897\" (UniqueName: \"kubernetes.io/projected/dc051fb6-064f-4ae7-8a0b-c69967d67049-kube-api-access-6k897\") pod \"memcached-0\" (UID: \"dc051fb6-064f-4ae7-8a0b-c69967d67049\") " pod="openstack/memcached-0"
Jan 31 07:39:22 crc kubenswrapper[4908]: I0131 07:39:22.956772 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.424358 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.425622 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.428288 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xlnzb"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.433811 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.469390 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pv7t\" (UniqueName: \"kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t\") pod \"kube-state-metrics-0\" (UID: \"5c66c6a9-7173-46fc-b95a-b14d535e1b84\") " pod="openstack/kube-state-metrics-0"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.570774 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pv7t\" (UniqueName: \"kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t\") pod \"kube-state-metrics-0\" (UID: \"5c66c6a9-7173-46fc-b95a-b14d535e1b84\") " pod="openstack/kube-state-metrics-0"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.590072 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pv7t\" (UniqueName: \"kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t\") pod \"kube-state-metrics-0\" (UID: \"5c66c6a9-7173-46fc-b95a-b14d535e1b84\") " pod="openstack/kube-state-metrics-0"
Jan 31 07:39:24 crc kubenswrapper[4908]: I0131 07:39:24.803945 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.936812 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jt4wn"]
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.938913 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.944407 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.944751 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.945037 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qk288"
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.988026 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn"]
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.996605 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lnjb2"]
Jan 31 07:39:27 crc kubenswrapper[4908]: I0131 07:39:27.998403 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.003686 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lnjb2"]
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.026622 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmp5k\" (UniqueName: \"kubernetes.io/projected/d6e9ace1-1aad-474c-a7be-73e4a08770e1-kube-api-access-kmp5k\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.026731 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-log-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.026775 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6e9ace1-1aad-474c-a7be-73e4a08770e1-scripts\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.026861 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.026919 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-lib\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027011 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-etc-ovs\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027070 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-run\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027112 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-log\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027155 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-ovn-controller-tls-certs\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027191 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-combined-ca-bundle\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027234 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027270 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a95e7519-346b-4852-a98a-f164fc0d2b83-scripts\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.027308 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p8s\" (UniqueName: \"kubernetes.io/projected/a95e7519-346b-4852-a98a-f164fc0d2b83-kube-api-access-z8p8s\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.133061 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a95e7519-346b-4852-a98a-f164fc0d2b83-scripts\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.133118 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p8s\" (UniqueName: \"kubernetes.io/projected/a95e7519-346b-4852-a98a-f164fc0d2b83-kube-api-access-z8p8s\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.133163 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmp5k\" (UniqueName: \"kubernetes.io/projected/d6e9ace1-1aad-474c-a7be-73e4a08770e1-kube-api-access-kmp5k\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.133218 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-log-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.133573 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6e9ace1-1aad-474c-a7be-73e4a08770e1-scripts\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134266 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-log-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134342 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134374 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-lib\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134427 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-etc-ovs\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134457 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-run\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134518 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-log\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134560 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-ovn-controller-tls-certs\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-combined-ca-bundle\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134627 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.134924 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6e9ace1-1aad-474c-a7be-73e4a08770e1-var-run-ovn\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135233 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-lib\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135378 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-etc-ovs\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135517 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-run\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135650 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a95e7519-346b-4852-a98a-f164fc0d2b83-var-log\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.135714 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a95e7519-346b-4852-a98a-f164fc0d2b83-scripts\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.136148 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6e9ace1-1aad-474c-a7be-73e4a08770e1-scripts\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.141041 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-combined-ca-bundle\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.149163 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6e9ace1-1aad-474c-a7be-73e4a08770e1-ovn-controller-tls-certs\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.149595 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmp5k\" (UniqueName: \"kubernetes.io/projected/d6e9ace1-1aad-474c-a7be-73e4a08770e1-kube-api-access-kmp5k\") pod \"ovn-controller-jt4wn\" (UID: \"d6e9ace1-1aad-474c-a7be-73e4a08770e1\") " pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.150263 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p8s\" (UniqueName: \"kubernetes.io/projected/a95e7519-346b-4852-a98a-f164fc0d2b83-kube-api-access-z8p8s\") pod \"ovn-controller-ovs-lnjb2\" (UID: \"a95e7519-346b-4852-a98a-f164fc0d2b83\") " pod="openstack/ovn-controller-ovs-lnjb2"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.178452 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3b18d7-fe50-4d65-b351-bb3bf14854f1","Type":"ContainerStarted","Data":"a92e6403645c18aed5488b31b307ea3550ac5fbb172794ad4f73d00428aff4b9"}
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.295499 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn"
Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.327055 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lnjb2" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.389242 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.397387 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.399932 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.400721 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hgbjm" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.400825 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.400940 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.400992 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.408316 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439781 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439826 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gppq\" (UniqueName: 
\"kubernetes.io/projected/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-kube-api-access-7gppq\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439874 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439895 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439930 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.439994 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.440008 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.440026 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541560 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541794 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541855 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541897 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " 
pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541930 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gppq\" (UniqueName: \"kubernetes.io/projected/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-kube-api-access-7gppq\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.541999 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.542033 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.542079 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.542284 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.542505 4908 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.544127 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.544833 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.549134 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.559332 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gppq\" (UniqueName: \"kubernetes.io/projected/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-kube-api-access-7gppq\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.564215 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 
07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.566487 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.567709 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:28 crc kubenswrapper[4908]: I0131 07:39:28.729680 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.516128 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.522940 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.525423 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.527472 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.527576 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.527487 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tt4ft" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.540844 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587363 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587421 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587445 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587504 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587529 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-config\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587560 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktklz\" (UniqueName: \"kubernetes.io/projected/9177e471-d5ab-4e6f-85d6-24bb337facf6-kube-api-access-ktklz\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587595 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.587610 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 
07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689412 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktklz\" (UniqueName: \"kubernetes.io/projected/9177e471-d5ab-4e6f-85d6-24bb337facf6-kube-api-access-ktklz\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689500 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689523 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689562 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689605 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689630 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689687 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.689711 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-config\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.690936 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-config\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.692611 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9177e471-d5ab-4e6f-85d6-24bb337facf6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.693755 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 
07:39:31.694052 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.700466 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.700746 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.705348 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177e471-d5ab-4e6f-85d6-24bb337facf6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.711625 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktklz\" (UniqueName: \"kubernetes.io/projected/9177e471-d5ab-4e6f-85d6-24bb337facf6-kube-api-access-ktklz\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.725477 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9177e471-d5ab-4e6f-85d6-24bb337facf6\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:31 crc kubenswrapper[4908]: I0131 07:39:31.866774 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 07:39:32 crc kubenswrapper[4908]: I0131 07:39:32.037860 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:39:48 crc kubenswrapper[4908]: W0131 07:39:48.963218 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c66c6a9_7173_46fc_b95a_b14d535e1b84.slice/crio-84261ff1dd6830f5593e9af64fb3ffcbf4bd60de50d608a93ada3ec087f48828 WatchSource:0}: Error finding container 84261ff1dd6830f5593e9af64fb3ffcbf4bd60de50d608a93ada3ec087f48828: Status 404 returned error can't find the container with id 84261ff1dd6830f5593e9af64fb3ffcbf4bd60de50d608a93ada3ec087f48828 Jan 31 07:39:49 crc kubenswrapper[4908]: I0131 07:39:49.342924 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c66c6a9-7173-46fc-b95a-b14d535e1b84","Type":"ContainerStarted","Data":"84261ff1dd6830f5593e9af64fb3ffcbf4bd60de50d608a93ada3ec087f48828"} Jan 31 07:39:53 crc kubenswrapper[4908]: E0131 07:39:53.093348 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:39:53 crc kubenswrapper[4908]: E0131 07:39:53.093794 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgkvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-l2hrp_openstack(5fb7706f-2af6-4cf0-8221-9bbf78b261a0): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Jan 31 07:39:53 crc kubenswrapper[4908]: E0131 07:39:53.094968 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" Jan 31 07:39:53 crc kubenswrapper[4908]: E0131 07:39:53.368435 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" Jan 31 07:40:00 crc kubenswrapper[4908]: E0131 07:40:00.890538 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:40:00 crc kubenswrapper[4908]: E0131 07:40:00.891224 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5kh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-vn4f5_openstack(c354654c-8b0b-48a4-a8ea-4cce2ba23701): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:40:00 crc kubenswrapper[4908]: E0131 07:40:00.892425 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5" podUID="c354654c-8b0b-48a4-a8ea-4cce2ba23701" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.150815 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.151034 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrqqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-v4vxc_openstack(c1a05f6f-7cfb-4c96-9649-5ed588e52fd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.152757 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.155472 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.155651 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5f6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-544z9_openstack(2c07e4c1-245f-4cd9-a49b-c12ed2d585e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.156994 4908 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9" podUID="2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.169364 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.169637 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7wm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,S
ubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(0b3b18d7-fe50-4d65-b351-bb3bf14854f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.171033 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="0b3b18d7-fe50-4d65-b351-bb3bf14854f1" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.421768 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" Jan 31 07:40:01 crc kubenswrapper[4908]: E0131 07:40:01.422727 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" 
podUID="0b3b18d7-fe50-4d65-b351-bb3bf14854f1" Jan 31 07:40:01 crc kubenswrapper[4908]: I0131 07:40:01.697963 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.022607 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn"] Jan 31 07:40:02 crc kubenswrapper[4908]: W0131 07:40:02.036200 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc051fb6_064f_4ae7_8a0b_c69967d67049.slice/crio-6e21b6024a4a1c83d517465045178db3e406c21bc28ec1e50a5450c3e1bfc560 WatchSource:0}: Error finding container 6e21b6024a4a1c83d517465045178db3e406c21bc28ec1e50a5450c3e1bfc560: Status 404 returned error can't find the container with id 6e21b6024a4a1c83d517465045178db3e406c21bc28ec1e50a5450c3e1bfc560 Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.037869 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.040549 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.048964 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 07:40:02 crc kubenswrapper[4908]: W0131 07:40:02.053817 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6e9ace1_1aad_474c_a7be_73e4a08770e1.slice/crio-fc42dad256b1a81f5e0a629af4e7900cfdb351fd3ce7c696209152e2602026cf WatchSource:0}: Error finding container fc42dad256b1a81f5e0a629af4e7900cfdb351fd3ce7c696209152e2602026cf: Status 404 returned error can't find the container with id fc42dad256b1a81f5e0a629af4e7900cfdb351fd3ce7c696209152e2602026cf Jan 31 07:40:02 crc kubenswrapper[4908]: W0131 07:40:02.058388 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3616414_a3b1_49e4_b87e_29abb6752ccb.slice/crio-435e133756ff9c7acc3d934aa6493563479aa4b2567762b0ddfcda88ad0ed19b WatchSource:0}: Error finding container 435e133756ff9c7acc3d934aa6493563479aa4b2567762b0ddfcda88ad0ed19b: Status 404 returned error can't find the container with id 435e133756ff9c7acc3d934aa6493563479aa4b2567762b0ddfcda88ad0ed19b Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109020 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config\") pod \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109073 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc\") pod \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " Jan 31 
07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109102 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config\") pod \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109208 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5kh5\" (UniqueName: \"kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5\") pod \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\" (UID: \"c354654c-8b0b-48a4-a8ea-4cce2ba23701\") " Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109269 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") pod \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109347 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lnjb2"] Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109762 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config" (OuterVolumeSpecName: "config") pod "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" (UID: "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.109810 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config" (OuterVolumeSpecName: "config") pod "c354654c-8b0b-48a4-a8ea-4cce2ba23701" (UID: "c354654c-8b0b-48a4-a8ea-4cce2ba23701"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.110209 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" (UID: "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.210177 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b" (OuterVolumeSpecName: "kube-api-access-r5f6b") pod "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" (UID: "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2"). InnerVolumeSpecName "kube-api-access-r5f6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.210407 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5" (OuterVolumeSpecName: "kube-api-access-v5kh5") pod "c354654c-8b0b-48a4-a8ea-4cce2ba23701" (UID: "c354654c-8b0b-48a4-a8ea-4cce2ba23701"). InnerVolumeSpecName "kube-api-access-v5kh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.211113 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") pod \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\" (UID: \"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2\") " Jan 31 07:40:02 crc kubenswrapper[4908]: W0131 07:40:02.211470 4908 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2/volumes/kubernetes.io~projected/kube-api-access-r5f6b Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.211571 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b" (OuterVolumeSpecName: "kube-api-access-r5f6b") pod "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" (UID: "2c07e4c1-245f-4cd9-a49b-c12ed2d585e2"). InnerVolumeSpecName "kube-api-access-r5f6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.212191 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5kh5\" (UniqueName: \"kubernetes.io/projected/c354654c-8b0b-48a4-a8ea-4cce2ba23701-kube-api-access-v5kh5\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.212213 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5f6b\" (UniqueName: \"kubernetes.io/projected/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-kube-api-access-r5f6b\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.212227 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.212239 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.212253 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c354654c-8b0b-48a4-a8ea-4cce2ba23701-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.427837 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5" event={"ID":"c354654c-8b0b-48a4-a8ea-4cce2ba23701","Type":"ContainerDied","Data":"80122a604ce456f7bb2653c5dc327276a0ce61ee484f2a89687e25911c48ff50"} Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.427852 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn4f5" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.428937 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b3616414-a3b1-49e4-b87e-29abb6752ccb","Type":"ContainerStarted","Data":"435e133756ff9c7acc3d934aa6493563479aa4b2567762b0ddfcda88ad0ed19b"} Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.433102 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9" event={"ID":"2c07e4c1-245f-4cd9-a49b-c12ed2d585e2","Type":"ContainerDied","Data":"3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6"} Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.433176 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-544z9" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.438639 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn" event={"ID":"d6e9ace1-1aad-474c-a7be-73e4a08770e1","Type":"ContainerStarted","Data":"fc42dad256b1a81f5e0a629af4e7900cfdb351fd3ce7c696209152e2602026cf"} Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.439891 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc051fb6-064f-4ae7-8a0b-c69967d67049","Type":"ContainerStarted","Data":"6e21b6024a4a1c83d517465045178db3e406c21bc28ec1e50a5450c3e1bfc560"} Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.537799 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"] Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.541864 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-544z9"] Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.615827 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"] Jan 31 07:40:02 crc 
kubenswrapper[4908]: I0131 07:40:02.634704 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn4f5"] Jan 31 07:40:02 crc kubenswrapper[4908]: E0131 07:40:02.645754 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc354654c_8b0b_48a4_a8ea_4cce2ba23701.slice/crio-80122a604ce456f7bb2653c5dc327276a0ce61ee484f2a89687e25911c48ff50\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc354654c_8b0b_48a4_a8ea_4cce2ba23701.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c07e4c1_245f_4cd9_a49b_c12ed2d585e2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c07e4c1_245f_4cd9_a49b_c12ed2d585e2.slice/crio-3739a5e422be45de99f87d1b0842528a4c2761dfbc72562182a5c72636452dc6\": RecentStats: unable to find data in memory cache]" Jan 31 07:40:02 crc kubenswrapper[4908]: I0131 07:40:02.764561 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.133821 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:40:03 crc kubenswrapper[4908]: W0131 07:40:03.222348 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a696d8_67cd_4f3d_b4db_b26d2e58d4c2.slice/crio-014869021ad69ba58269fe092c6747603f897828624f656e535a3f76e826f97b WatchSource:0}: Error finding container 014869021ad69ba58269fe092c6747603f897828624f656e535a3f76e826f97b: Status 404 returned error can't find the container with id 014869021ad69ba58269fe092c6747603f897828624f656e535a3f76e826f97b Jan 31 07:40:03 crc 
kubenswrapper[4908]: I0131 07:40:03.460137 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lnjb2" event={"ID":"a95e7519-346b-4852-a98a-f164fc0d2b83","Type":"ContainerStarted","Data":"d51114f94effb4ec152ecbb9d97bb61b1bc9bb63bfc1cf18d0e5a0b6e44cab0c"} Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.461881 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9177e471-d5ab-4e6f-85d6-24bb337facf6","Type":"ContainerStarted","Data":"6a04f5c8f8e3cd168e5e35d025e17c86d9510a5522cece7c48c931b6bb20d7cb"} Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.464302 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerStarted","Data":"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2"} Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.467010 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2","Type":"ContainerStarted","Data":"014869021ad69ba58269fe092c6747603f897828624f656e535a3f76e826f97b"} Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.474462 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerStarted","Data":"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2"} Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.962162 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c07e4c1-245f-4cd9-a49b-c12ed2d585e2" path="/var/lib/kubelet/pods/2c07e4c1-245f-4cd9-a49b-c12ed2d585e2/volumes" Jan 31 07:40:03 crc kubenswrapper[4908]: I0131 07:40:03.962802 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c354654c-8b0b-48a4-a8ea-4cce2ba23701" 
path="/var/lib/kubelet/pods/c354654c-8b0b-48a4-a8ea-4cce2ba23701/volumes" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.531887 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c66c6a9-7173-46fc-b95a-b14d535e1b84","Type":"ContainerStarted","Data":"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.532542 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.533543 4908 generic.go:334] "Generic (PLEG): container finished" podID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerID="0a1ca55619ae79887fff15d990cd0183683c2223f0cfb9123ee7bb364d5da0bc" exitCode=0 Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.533611 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" event={"ID":"5fb7706f-2af6-4cf0-8221-9bbf78b261a0","Type":"ContainerDied","Data":"0a1ca55619ae79887fff15d990cd0183683c2223f0cfb9123ee7bb364d5da0bc"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.535764 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn" event={"ID":"d6e9ace1-1aad-474c-a7be-73e4a08770e1","Type":"ContainerStarted","Data":"bb863317d91722e61639209763ca612e0c653f69e22164b6003831c7c093bdda"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.535843 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jt4wn" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.538677 4908 generic.go:334] "Generic (PLEG): container finished" podID="a95e7519-346b-4852-a98a-f164fc0d2b83" containerID="20c460ba9cd78ebdb6444fb09086ef854f0c4a799e0c8d3cf13b0fa8426ea8ee" exitCode=0 Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.538775 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-lnjb2" event={"ID":"a95e7519-346b-4852-a98a-f164fc0d2b83","Type":"ContainerDied","Data":"20c460ba9cd78ebdb6444fb09086ef854f0c4a799e0c8d3cf13b0fa8426ea8ee"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.541682 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9177e471-d5ab-4e6f-85d6-24bb337facf6","Type":"ContainerStarted","Data":"67283c335ae41379bd7debe6a6432f66c20cbe9116702ea21a5e867fdbb73025"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.543271 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc051fb6-064f-4ae7-8a0b-c69967d67049","Type":"ContainerStarted","Data":"b87d1f2b7610f5f96cbfe364f76871442d9e165d3c816b8a88aefb04240ccae6"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.543879 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.547570 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b3616414-a3b1-49e4-b87e-29abb6752ccb","Type":"ContainerStarted","Data":"01aac88fde8b7c2169037bd3e6a8e49737f705474278bf878ed6af946f78966b"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.549571 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2","Type":"ContainerStarted","Data":"c92df15c4cfdbfb1faff809db9957e213d35e45cfaa6aadb66a8872614e9df73"} Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.563081 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.968172628 podStartE2EDuration="46.563064351s" podCreationTimestamp="2026-01-31 07:39:24 +0000 UTC" firstStartedPulling="2026-01-31 07:39:48.968054231 +0000 UTC m=+1095.583998925" lastFinishedPulling="2026-01-31 07:40:06.562945994 +0000 
UTC m=+1113.178890648" observedRunningTime="2026-01-31 07:40:10.56148928 +0000 UTC m=+1117.177433944" watchObservedRunningTime="2026-01-31 07:40:10.563064351 +0000 UTC m=+1117.179009005" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.599176 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=41.54064057 podStartE2EDuration="48.599151181s" podCreationTimestamp="2026-01-31 07:39:22 +0000 UTC" firstStartedPulling="2026-01-31 07:40:02.038269446 +0000 UTC m=+1108.654214100" lastFinishedPulling="2026-01-31 07:40:09.096780007 +0000 UTC m=+1115.712724711" observedRunningTime="2026-01-31 07:40:10.585801832 +0000 UTC m=+1117.201746486" watchObservedRunningTime="2026-01-31 07:40:10.599151181 +0000 UTC m=+1117.215095835" Jan 31 07:40:10 crc kubenswrapper[4908]: I0131 07:40:10.678062 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jt4wn" podStartSLOduration=36.19862324 podStartE2EDuration="43.678046226s" podCreationTimestamp="2026-01-31 07:39:27 +0000 UTC" firstStartedPulling="2026-01-31 07:40:02.060877056 +0000 UTC m=+1108.676821710" lastFinishedPulling="2026-01-31 07:40:09.540300042 +0000 UTC m=+1116.156244696" observedRunningTime="2026-01-31 07:40:10.671730283 +0000 UTC m=+1117.287674937" watchObservedRunningTime="2026-01-31 07:40:10.678046226 +0000 UTC m=+1117.293990880" Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.564482 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lnjb2" event={"ID":"a95e7519-346b-4852-a98a-f164fc0d2b83","Type":"ContainerStarted","Data":"b6b0bc1ae871fe447dbc155871824ba3eb952ca67c449694212de607372eef28"} Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.565454 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lnjb2" 
event={"ID":"a95e7519-346b-4852-a98a-f164fc0d2b83","Type":"ContainerStarted","Data":"cc6304695f0d5a7853b46723476918bd8393495f6a6884bdfca2da07dea6fede"} Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.566493 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lnjb2" Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.566750 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lnjb2" Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.574267 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" event={"ID":"5fb7706f-2af6-4cf0-8221-9bbf78b261a0","Type":"ContainerStarted","Data":"6414eb4a2830833acb9fbfc56dfde9ac91ddfeac0a8eea3f4dfada911719b969"} Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.591501 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lnjb2" podStartSLOduration=38.106248917 podStartE2EDuration="44.591483599s" podCreationTimestamp="2026-01-31 07:39:27 +0000 UTC" firstStartedPulling="2026-01-31 07:40:02.611397892 +0000 UTC m=+1109.227342546" lastFinishedPulling="2026-01-31 07:40:09.096632574 +0000 UTC m=+1115.712577228" observedRunningTime="2026-01-31 07:40:11.588879778 +0000 UTC m=+1118.204824452" watchObservedRunningTime="2026-01-31 07:40:11.591483599 +0000 UTC m=+1118.207428253" Jan 31 07:40:11 crc kubenswrapper[4908]: I0131 07:40:11.610017 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" podStartSLOduration=3.291463623 podStartE2EDuration="53.609995869s" podCreationTimestamp="2026-01-31 07:39:18 +0000 UTC" firstStartedPulling="2026-01-31 07:39:19.248071917 +0000 UTC m=+1065.864016571" lastFinishedPulling="2026-01-31 07:40:09.566604163 +0000 UTC m=+1116.182548817" observedRunningTime="2026-01-31 07:40:11.605636654 +0000 UTC m=+1118.221581328" 
watchObservedRunningTime="2026-01-31 07:40:11.609995869 +0000 UTC m=+1118.225940523" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.589283 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9177e471-d5ab-4e6f-85d6-24bb337facf6","Type":"ContainerStarted","Data":"4c0f22f8ca370f2267fc85d37aaf6f10877e42b8831aceafd9993225839b434e"} Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.590693 4908 generic.go:334] "Generic (PLEG): container finished" podID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerID="7e3bca82a389981b09bc3a6a940dc4b2a47d2b7d90ab74af05c0ebea2f5ed35a" exitCode=0 Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.590765 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" event={"ID":"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5","Type":"ContainerDied","Data":"7e3bca82a389981b09bc3a6a940dc4b2a47d2b7d90ab74af05c0ebea2f5ed35a"} Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.592828 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2","Type":"ContainerStarted","Data":"0de84e6ccce2ffb4302197ac211191334ee05cc5e8fbfe218bec71aed04badb8"} Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.619742 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=33.584852341 podStartE2EDuration="43.61972712s" podCreationTimestamp="2026-01-31 07:39:30 +0000 UTC" firstStartedPulling="2026-01-31 07:40:02.901658706 +0000 UTC m=+1109.517603360" lastFinishedPulling="2026-01-31 07:40:12.936533485 +0000 UTC m=+1119.552478139" observedRunningTime="2026-01-31 07:40:13.613632802 +0000 UTC m=+1120.229577466" watchObservedRunningTime="2026-01-31 07:40:13.61972712 +0000 UTC m=+1120.235671774" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.644496 4908 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=37.061718375 podStartE2EDuration="46.644476222s" podCreationTimestamp="2026-01-31 07:39:27 +0000 UTC" firstStartedPulling="2026-01-31 07:40:03.364356494 +0000 UTC m=+1109.980301148" lastFinishedPulling="2026-01-31 07:40:12.947114341 +0000 UTC m=+1119.563058995" observedRunningTime="2026-01-31 07:40:13.635781893 +0000 UTC m=+1120.251726547" watchObservedRunningTime="2026-01-31 07:40:13.644476222 +0000 UTC m=+1120.260420876" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.701883 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.730678 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.730751 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.766044 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.867287 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 07:40:13 crc kubenswrapper[4908]: I0131 07:40:13.900705 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.599681 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3b18d7-fe50-4d65-b351-bb3bf14854f1","Type":"ContainerStarted","Data":"251879671f05eb847b25cf0c899af10908f7ae993a48496f82afdcdfd571b4e7"} Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.601077 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" 
event={"ID":"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5","Type":"ContainerStarted","Data":"2c4143e415a823423561cd2e3d04ec40f20d3613be95fcda285c8ca473f98b1d"} Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.601972 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.639030 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" podStartSLOduration=-9223371980.215765 podStartE2EDuration="56.639010501s" podCreationTimestamp="2026-01-31 07:39:18 +0000 UTC" firstStartedPulling="2026-01-31 07:39:19.545714715 +0000 UTC m=+1066.161659369" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:14.634661876 +0000 UTC m=+1121.250606550" watchObservedRunningTime="2026-01-31 07:40:14.639010501 +0000 UTC m=+1121.254955155" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.644205 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.648233 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.811991 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.947474 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"] Jan 31 07:40:14 crc kubenswrapper[4908]: I0131 07:40:14.996592 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.002004 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.004533 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.025132 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.051343 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.051459 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vxk\" (UniqueName: \"kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.051499 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.051566 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.115471 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4xcwk"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.125524 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.131696 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.153811 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.153908 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vxk\" (UniqueName: \"kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.153938 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.154000 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.154884 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.155422 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.156237 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.158211 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4xcwk"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.200850 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5vxk\" (UniqueName: \"kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk\") pod \"dnsmasq-dns-5bf47b49b7-cm5cs\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.245198 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 
07:40:15.246705 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.251491 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.251738 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.252031 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.252198 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jndb9" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255147 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovn-rundir\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255215 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovs-rundir\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255297 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wttw\" (UniqueName: \"kubernetes.io/projected/eea5d160-58e2-45d2-9546-4a837cb35f0b-kube-api-access-6wttw\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " 
pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255373 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-combined-ca-bundle\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255470 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.255559 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea5d160-58e2-45d2-9546-4a837cb35f0b-config\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.272144 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.280632 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.280920 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="dnsmasq-dns" containerID="cri-o://6414eb4a2830833acb9fbfc56dfde9ac91ddfeac0a8eea3f4dfada911719b969" gracePeriod=10 Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 
07:40:15.291194 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.314853 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.316642 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.319164 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.321334 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.339445 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"] Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.358624 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovn-rundir\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.358956 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-config\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359070 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovs-rundir\") pod 
\"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7tg\" (UniqueName: \"kubernetes.io/projected/a672a17c-744a-4e39-b753-e406a91a02f0-kube-api-access-tz7tg\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359147 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359187 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wttw\" (UniqueName: \"kubernetes.io/projected/eea5d160-58e2-45d2-9546-4a837cb35f0b-kube-api-access-6wttw\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359229 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359259 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-scripts\") pod \"ovn-northd-0\" (UID: 
\"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359292 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359312 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359345 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-combined-ca-bundle\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359402 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359437 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea5d160-58e2-45d2-9546-4a837cb35f0b-config\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 
07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359576 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovs-rundir\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.359800 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eea5d160-58e2-45d2-9546-4a837cb35f0b-ovn-rundir\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.360188 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea5d160-58e2-45d2-9546-4a837cb35f0b-config\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.366526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.367970 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea5d160-58e2-45d2-9546-4a837cb35f0b-combined-ca-bundle\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.383563 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6wttw\" (UniqueName: \"kubernetes.io/projected/eea5d160-58e2-45d2-9546-4a837cb35f0b-kube-api-access-6wttw\") pod \"ovn-controller-metrics-4xcwk\" (UID: \"eea5d160-58e2-45d2-9546-4a837cb35f0b\") " pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.443565 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4xcwk" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.460937 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7tg\" (UniqueName: \"kubernetes.io/projected/a672a17c-744a-4e39-b753-e406a91a02f0-kube-api-access-tz7tg\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.460993 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461048 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461072 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " 
pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461106 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-scripts\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461224 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461274 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461302 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461383 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4qw\" (UniqueName: \"kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461418 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461486 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-config\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.461856 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.462215 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-config\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.462660 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a672a17c-744a-4e39-b753-e406a91a02f0-scripts\") pod 
\"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.466648 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.469761 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.472307 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a672a17c-744a-4e39-b753-e406a91a02f0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.478005 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7tg\" (UniqueName: \"kubernetes.io/projected/a672a17c-744a-4e39-b753-e406a91a02f0-kube-api-access-tz7tg\") pod \"ovn-northd-0\" (UID: \"a672a17c-744a-4e39-b753-e406a91a02f0\") " pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.563349 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.563401 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.563460 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.563550 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4qw\" (UniqueName: \"kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.563676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.564556 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.564571 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.564615 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.565313 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.570217 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.588374 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4qw\" (UniqueName: \"kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw\") pod \"dnsmasq-dns-8554648995-bdv78\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") " pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.632361 4908 generic.go:334] "Generic (PLEG): container finished" podID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerID="6414eb4a2830833acb9fbfc56dfde9ac91ddfeac0a8eea3f4dfada911719b969" exitCode=0 Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.632546 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" event={"ID":"5fb7706f-2af6-4cf0-8221-9bbf78b261a0","Type":"ContainerDied","Data":"6414eb4a2830833acb9fbfc56dfde9ac91ddfeac0a8eea3f4dfada911719b969"} Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.632587 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="dnsmasq-dns" containerID="cri-o://2c4143e415a823423561cd2e3d04ec40f20d3613be95fcda285c8ca473f98b1d" gracePeriod=10 Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.633322 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.735727 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.775106 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4xcwk"] Jan 31 07:40:15 crc kubenswrapper[4908]: W0131 07:40:15.780730 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea5d160_58e2_45d2_9546_4a837cb35f0b.slice/crio-50ca1dafec74e6d71017b11225757081ea7a5543eefea60162cd234c9cb740ea WatchSource:0}: Error finding container 50ca1dafec74e6d71017b11225757081ea7a5543eefea60162cd234c9cb740ea: Status 404 returned error can't find the container with id 50ca1dafec74e6d71017b11225757081ea7a5543eefea60162cd234c9cb740ea Jan 31 07:40:15 crc kubenswrapper[4908]: I0131 07:40:15.862769 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.068242 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.214354 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"] Jan 31 07:40:16 crc kubenswrapper[4908]: W0131 07:40:16.217166 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod269111c9_ab7c_4600_a09a_d542812483ba.slice/crio-226d2d504d6cc45455120e7b440c676d45694b1dd486fa4e19ffd70d364fdf97 WatchSource:0}: Error finding container 226d2d504d6cc45455120e7b440c676d45694b1dd486fa4e19ffd70d364fdf97: Status 404 returned error can't find the container with id 226d2d504d6cc45455120e7b440c676d45694b1dd486fa4e19ffd70d364fdf97 Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.641875 4908 generic.go:334] "Generic (PLEG): container finished" podID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" 
containerID="2c4143e415a823423561cd2e3d04ec40f20d3613be95fcda285c8ca473f98b1d" exitCode=0 Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.641945 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" event={"ID":"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5","Type":"ContainerDied","Data":"2c4143e415a823423561cd2e3d04ec40f20d3613be95fcda285c8ca473f98b1d"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.643202 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bdv78" event={"ID":"269111c9-ab7c-4600-a09a-d542812483ba","Type":"ContainerStarted","Data":"226d2d504d6cc45455120e7b440c676d45694b1dd486fa4e19ffd70d364fdf97"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.645005 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerStarted","Data":"c29de9d266bf9e6b323d81ec0526941da0187d8b44ea1cc984176df959f73fe3"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.645039 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerStarted","Data":"9d8f2a798cf36850c3f99052c388dbceb58cf2e047c511b2b5f341418c8ecbcf"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.646239 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4xcwk" event={"ID":"eea5d160-58e2-45d2-9546-4a837cb35f0b","Type":"ContainerStarted","Data":"7729bc296c5c45bd07ee9c7b98a1861cb3045ccb586adb34de95a4ec74569290"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.646264 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4xcwk" event={"ID":"eea5d160-58e2-45d2-9546-4a837cb35f0b","Type":"ContainerStarted","Data":"50ca1dafec74e6d71017b11225757081ea7a5543eefea60162cd234c9cb740ea"} Jan 31 07:40:16 crc 
kubenswrapper[4908]: I0131 07:40:16.649962 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a672a17c-744a-4e39-b753-e406a91a02f0","Type":"ContainerStarted","Data":"9276aebc1a13341537c656d65503d9b5ab12f252e1bc67c813f78f25f9a0bd71"} Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.805652 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.812681 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902654 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc\") pod \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902699 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrqqq\" (UniqueName: \"kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq\") pod \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\" (UID: \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902740 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config\") pod \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902783 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config\") pod \"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\" (UID: 
\"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902854 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgkvd\" (UniqueName: \"kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd\") pod \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.902886 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc\") pod \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\" (UID: \"5fb7706f-2af6-4cf0-8221-9bbf78b261a0\") " Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.910569 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd" (OuterVolumeSpecName: "kube-api-access-kgkvd") pod "5fb7706f-2af6-4cf0-8221-9bbf78b261a0" (UID: "5fb7706f-2af6-4cf0-8221-9bbf78b261a0"). InnerVolumeSpecName "kube-api-access-kgkvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.922193 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq" (OuterVolumeSpecName: "kube-api-access-lrqqq") pod "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" (UID: "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5"). InnerVolumeSpecName "kube-api-access-lrqqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.950389 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" (UID: "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.958604 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config" (OuterVolumeSpecName: "config") pod "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" (UID: "c1a05f6f-7cfb-4c96-9649-5ed588e52fd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.994359 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config" (OuterVolumeSpecName: "config") pod "5fb7706f-2af6-4cf0-8221-9bbf78b261a0" (UID: "5fb7706f-2af6-4cf0-8221-9bbf78b261a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:16 crc kubenswrapper[4908]: I0131 07:40:16.994787 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fb7706f-2af6-4cf0-8221-9bbf78b261a0" (UID: "5fb7706f-2af6-4cf0-8221-9bbf78b261a0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007320 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007350 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrqqq\" (UniqueName: \"kubernetes.io/projected/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-kube-api-access-lrqqq\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007471 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007488 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007501 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgkvd\" (UniqueName: \"kubernetes.io/projected/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-kube-api-access-kgkvd\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.007511 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fb7706f-2af6-4cf0-8221-9bbf78b261a0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.663788 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" event={"ID":"c1a05f6f-7cfb-4c96-9649-5ed588e52fd5","Type":"ContainerDied","Data":"0c3a8e95cdf3f267b5ad1e0db6f5a466ced047ee19205443a17462a137fbabe9"} Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 
07:40:17.664323 4908 scope.go:117] "RemoveContainer" containerID="2c4143e415a823423561cd2e3d04ec40f20d3613be95fcda285c8ca473f98b1d" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.663824 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v4vxc" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.668134 4908 generic.go:334] "Generic (PLEG): container finished" podID="269111c9-ab7c-4600-a09a-d542812483ba" containerID="125278caa89fd2dc3b60745fbe8bb7be2bb0eb9be00f697a873b643993ea71a0" exitCode=0 Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.668224 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bdv78" event={"ID":"269111c9-ab7c-4600-a09a-d542812483ba","Type":"ContainerDied","Data":"125278caa89fd2dc3b60745fbe8bb7be2bb0eb9be00f697a873b643993ea71a0"} Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.674319 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerID="c29de9d266bf9e6b323d81ec0526941da0187d8b44ea1cc984176df959f73fe3" exitCode=0 Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.674421 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerDied","Data":"c29de9d266bf9e6b323d81ec0526941da0187d8b44ea1cc984176df959f73fe3"} Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.678134 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" event={"ID":"5fb7706f-2af6-4cf0-8221-9bbf78b261a0","Type":"ContainerDied","Data":"ec751d7d7303ab7e6a7f217f6a2f89a49f4e6b6a5e7d4b4227b6ada98ea68b93"} Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.678169 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l2hrp" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.730858 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4xcwk" podStartSLOduration=2.7308358249999998 podStartE2EDuration="2.730835825s" podCreationTimestamp="2026-01-31 07:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:17.721644067 +0000 UTC m=+1124.337588731" watchObservedRunningTime="2026-01-31 07:40:17.730835825 +0000 UTC m=+1124.346780479" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.763236 4908 scope.go:117] "RemoveContainer" containerID="7e3bca82a389981b09bc3a6a940dc4b2a47d2b7d90ab74af05c0ebea2f5ed35a" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.791320 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"] Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.798158 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l2hrp"] Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.806023 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"] Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.812470 4908 scope.go:117] "RemoveContainer" containerID="6414eb4a2830833acb9fbfc56dfde9ac91ddfeac0a8eea3f4dfada911719b969" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.814041 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v4vxc"] Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.836120 4908 scope.go:117] "RemoveContainer" containerID="0a1ca55619ae79887fff15d990cd0183683c2223f0cfb9123ee7bb364d5da0bc" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.950001 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" path="/var/lib/kubelet/pods/5fb7706f-2af6-4cf0-8221-9bbf78b261a0/volumes" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.951586 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" path="/var/lib/kubelet/pods/c1a05f6f-7cfb-4c96-9649-5ed588e52fd5/volumes" Jan 31 07:40:17 crc kubenswrapper[4908]: I0131 07:40:17.959166 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.686536 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bdv78" event={"ID":"269111c9-ab7c-4600-a09a-d542812483ba","Type":"ContainerStarted","Data":"7f7c6afdbcc44ffd8afad0e835a7c5e55cd422049835838287de2b23120a5d27"} Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.687471 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.688454 4908 generic.go:334] "Generic (PLEG): container finished" podID="b3616414-a3b1-49e4-b87e-29abb6752ccb" containerID="01aac88fde8b7c2169037bd3e6a8e49737f705474278bf878ed6af946f78966b" exitCode=0 Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.688523 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b3616414-a3b1-49e4-b87e-29abb6752ccb","Type":"ContainerDied","Data":"01aac88fde8b7c2169037bd3e6a8e49737f705474278bf878ed6af946f78966b"} Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.690480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerStarted","Data":"14e69b66105881b23e242464b73b7697b9287802f3b704e1eae789b74013390e"} Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.690562 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.693805 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a672a17c-744a-4e39-b753-e406a91a02f0","Type":"ContainerStarted","Data":"6b911ac96e6ecee09773dd6e19440be18176ae852e723a9de0ad5fd9d4e2642a"} Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.693859 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a672a17c-744a-4e39-b753-e406a91a02f0","Type":"ContainerStarted","Data":"6f0cb28d2cc66a132d226e70a40cfce80c9b04ebec790874e3d62422a665855a"} Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.693908 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.710257 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-bdv78" podStartSLOduration=3.71024313 podStartE2EDuration="3.71024313s" podCreationTimestamp="2026-01-31 07:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:18.707522457 +0000 UTC m=+1125.323467111" watchObservedRunningTime="2026-01-31 07:40:18.71024313 +0000 UTC m=+1125.326187784" Jan 31 07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.727542 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" podStartSLOduration=4.727524266 podStartE2EDuration="4.727524266s" podCreationTimestamp="2026-01-31 07:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:18.724687981 +0000 UTC m=+1125.340632645" watchObservedRunningTime="2026-01-31 07:40:18.727524266 +0000 UTC m=+1125.343468920" Jan 31 
07:40:18 crc kubenswrapper[4908]: I0131 07:40:18.786210 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.269597904 podStartE2EDuration="3.786195117s" podCreationTimestamp="2026-01-31 07:40:15 +0000 UTC" firstStartedPulling="2026-01-31 07:40:16.068614941 +0000 UTC m=+1122.684559595" lastFinishedPulling="2026-01-31 07:40:17.585212154 +0000 UTC m=+1124.201156808" observedRunningTime="2026-01-31 07:40:18.78221202 +0000 UTC m=+1125.398156674" watchObservedRunningTime="2026-01-31 07:40:18.786195117 +0000 UTC m=+1125.402139771" Jan 31 07:40:19 crc kubenswrapper[4908]: I0131 07:40:19.704749 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b3616414-a3b1-49e4-b87e-29abb6752ccb","Type":"ContainerStarted","Data":"2a9e24b11009344cf5819c5c15d0cabc2e6b23b9ea7ff9009fff584967ebab54"} Jan 31 07:40:19 crc kubenswrapper[4908]: I0131 07:40:19.734105 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=56.313700933 podStartE2EDuration="58.73408744s" podCreationTimestamp="2026-01-31 07:39:21 +0000 UTC" firstStartedPulling="2026-01-31 07:40:02.065156319 +0000 UTC m=+1108.681100973" lastFinishedPulling="2026-01-31 07:40:04.485542826 +0000 UTC m=+1111.101487480" observedRunningTime="2026-01-31 07:40:19.721035816 +0000 UTC m=+1126.336980490" watchObservedRunningTime="2026-01-31 07:40:19.73408744 +0000 UTC m=+1126.350032094" Jan 31 07:40:20 crc kubenswrapper[4908]: I0131 07:40:20.713874 4908 generic.go:334] "Generic (PLEG): container finished" podID="0b3b18d7-fe50-4d65-b351-bb3bf14854f1" containerID="251879671f05eb847b25cf0c899af10908f7ae993a48496f82afdcdfd571b4e7" exitCode=0 Jan 31 07:40:20 crc kubenswrapper[4908]: I0131 07:40:20.713916 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"0b3b18d7-fe50-4d65-b351-bb3bf14854f1","Type":"ContainerDied","Data":"251879671f05eb847b25cf0c899af10908f7ae993a48496f82afdcdfd571b4e7"} Jan 31 07:40:21 crc kubenswrapper[4908]: I0131 07:40:21.726233 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3b18d7-fe50-4d65-b351-bb3bf14854f1","Type":"ContainerStarted","Data":"d467d8573d0b219d5eb05aa2951a3b570abf342ae23ea6f5afba5ed8512c36e4"} Jan 31 07:40:21 crc kubenswrapper[4908]: I0131 07:40:21.751639 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371974.103155 podStartE2EDuration="1m2.751621213s" podCreationTimestamp="2026-01-31 07:39:19 +0000 UTC" firstStartedPulling="2026-01-31 07:39:27.354447027 +0000 UTC m=+1073.970391681" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:21.743965924 +0000 UTC m=+1128.359910578" watchObservedRunningTime="2026-01-31 07:40:21.751621213 +0000 UTC m=+1128.367565867" Jan 31 07:40:22 crc kubenswrapper[4908]: I0131 07:40:22.576404 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 07:40:22 crc kubenswrapper[4908]: I0131 07:40:22.576879 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 07:40:25.082265 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 07:40:25.151677 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 07:40:25.323803 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 
07:40:25.737203 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-bdv78" Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 07:40:25.789875 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:25 crc kubenswrapper[4908]: I0131 07:40:25.790329 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="dnsmasq-dns" containerID="cri-o://14e69b66105881b23e242464b73b7697b9287802f3b704e1eae789b74013390e" gracePeriod=10 Jan 31 07:40:26 crc kubenswrapper[4908]: I0131 07:40:26.761551 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerID="14e69b66105881b23e242464b73b7697b9287802f3b704e1eae789b74013390e" exitCode=0 Jan 31 07:40:26 crc kubenswrapper[4908]: I0131 07:40:26.761648 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerDied","Data":"14e69b66105881b23e242464b73b7697b9287802f3b704e1eae789b74013390e"} Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.555311 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.669929 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5vxk\" (UniqueName: \"kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk\") pod \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.670075 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb\") pod \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.670103 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config\") pod \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.670222 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc\") pod \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\" (UID: \"0c59abf0-40c3-445e-b42e-4f3d5b19e94c\") " Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.675243 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk" (OuterVolumeSpecName: "kube-api-access-h5vxk") pod "0c59abf0-40c3-445e-b42e-4f3d5b19e94c" (UID: "0c59abf0-40c3-445e-b42e-4f3d5b19e94c"). InnerVolumeSpecName "kube-api-access-h5vxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.705880 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c59abf0-40c3-445e-b42e-4f3d5b19e94c" (UID: "0c59abf0-40c3-445e-b42e-4f3d5b19e94c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.706682 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config" (OuterVolumeSpecName: "config") pod "0c59abf0-40c3-445e-b42e-4f3d5b19e94c" (UID: "0c59abf0-40c3-445e-b42e-4f3d5b19e94c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.708738 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c59abf0-40c3-445e-b42e-4f3d5b19e94c" (UID: "0c59abf0-40c3-445e-b42e-4f3d5b19e94c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.772006 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" event={"ID":"0c59abf0-40c3-445e-b42e-4f3d5b19e94c","Type":"ContainerDied","Data":"9d8f2a798cf36850c3f99052c388dbceb58cf2e047c511b2b5f341418c8ecbcf"} Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.772055 4908 scope.go:117] "RemoveContainer" containerID="14e69b66105881b23e242464b73b7697b9287802f3b704e1eae789b74013390e" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.772087 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cm5cs" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.773283 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.773646 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.773712 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.773731 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5vxk\" (UniqueName: \"kubernetes.io/projected/0c59abf0-40c3-445e-b42e-4f3d5b19e94c-kube-api-access-h5vxk\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.790700 4908 scope.go:117] "RemoveContainer" containerID="c29de9d266bf9e6b323d81ec0526941da0187d8b44ea1cc984176df959f73fe3" Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.810105 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.812529 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cm5cs"] Jan 31 07:40:27 crc kubenswrapper[4908]: I0131 07:40:27.950516 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" path="/var/lib/kubelet/pods/0c59abf0-40c3-445e-b42e-4f3d5b19e94c/volumes" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.305776 4908 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-qnzvj"] Jan 31 07:40:31 crc kubenswrapper[4908]: E0131 07:40:31.306564 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306576 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: E0131 07:40:31.306590 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306596 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: E0131 07:40:31.306616 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306623 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: E0131 07:40:31.306643 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306650 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: E0131 07:40:31.306667 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306672 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="init" Jan 31 07:40:31 crc kubenswrapper[4908]: 
E0131 07:40:31.306682 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306687 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306830 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a05f6f-7cfb-4c96-9649-5ed588e52fd5" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306855 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c59abf0-40c3-445e-b42e-4f3d5b19e94c" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.306868 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb7706f-2af6-4cf0-8221-9bbf78b261a0" containerName="dnsmasq-dns" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.307400 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.309436 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.310777 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.310826 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.315793 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qnzvj"] Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.382239 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.428911 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52l5\" (UniqueName: \"kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.429000 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.530952 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j52l5\" (UniqueName: 
\"kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.531085 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.532165 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.552069 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j52l5\" (UniqueName: \"kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5\") pod \"root-account-create-update-qnzvj\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.625766 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:31 crc kubenswrapper[4908]: I0131 07:40:31.888672 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.087514 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qnzvj"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.519455 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-klpc8"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.520372 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.527788 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-klpc8"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.624986 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0720-account-create-update-v6x8z"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.625874 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.627461 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.633918 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0720-account-create-update-v6x8z"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.646425 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdgr\" (UniqueName: \"kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.646652 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.747669 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdgr\" (UniqueName: \"kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.747771 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: 
\"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.747802 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjdr\" (UniqueName: \"kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.747835 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.748595 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.782677 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdgr\" (UniqueName: \"kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr\") pod \"keystone-db-create-klpc8\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") " pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.816396 4908 generic.go:334] "Generic (PLEG): container finished" podID="a8b2e087-b297-4001-aad2-6d7ca61ff1c7" containerID="c51c83aeb7b3ca4c74797da5d96af03183118da7dd1844bc7c7a0bc3735f9e9c" exitCode=0 Jan 31 07:40:32 crc kubenswrapper[4908]: 
I0131 07:40:32.816450 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qnzvj" event={"ID":"a8b2e087-b297-4001-aad2-6d7ca61ff1c7","Type":"ContainerDied","Data":"c51c83aeb7b3ca4c74797da5d96af03183118da7dd1844bc7c7a0bc3735f9e9c"} Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.816480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qnzvj" event={"ID":"a8b2e087-b297-4001-aad2-6d7ca61ff1c7","Type":"ContainerStarted","Data":"6e7f91d8889af79f297e151bf2cf475aeff902a7082bd6b1e2b1f41870818091"} Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.833956 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-klpc8" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.840335 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8mnl5"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.841524 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.859960 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.860028 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjdr\" (UniqueName: \"kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.860970 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.889350 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8mnl5"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.928546 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjdr\" (UniqueName: \"kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr\") pod \"keystone-0720-account-create-update-v6x8z\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") " pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.939846 4908 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-0720-account-create-update-v6x8z" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.961322 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts\") pod \"placement-db-create-8mnl5\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.961473 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc57k\" (UniqueName: \"kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k\") pod \"placement-db-create-8mnl5\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.968285 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-61b8-account-create-update-mwxlx"] Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.971272 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.972784 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 07:40:32 crc kubenswrapper[4908]: I0131 07:40:32.995438 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-61b8-account-create-update-mwxlx"] Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.064039 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts\") pod \"placement-db-create-8mnl5\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.064646 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts\") pod \"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.064707 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vzrr\" (UniqueName: \"kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr\") pod \"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.064778 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts\") pod \"placement-db-create-8mnl5\" (UID: 
\"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.065016 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc57k\" (UniqueName: \"kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k\") pod \"placement-db-create-8mnl5\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.093573 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc57k\" (UniqueName: \"kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k\") pod \"placement-db-create-8mnl5\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") " pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.166374 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts\") pod \"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.166470 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vzrr\" (UniqueName: \"kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr\") pod \"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.167632 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts\") pod 
\"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.188665 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vzrr\" (UniqueName: \"kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr\") pod \"placement-61b8-account-create-update-mwxlx\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") " pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.329619 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8mnl5" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.337415 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-61b8-account-create-update-mwxlx" Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.379517 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-klpc8"] Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.463863 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0720-account-create-update-v6x8z"] Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.823729 4908 generic.go:334] "Generic (PLEG): container finished" podID="498a5601-b656-4767-b223-2f59c64d76c5" containerID="76a1780294639786eb222fdf49fa236ba7eb6f9e3d9984706a6129dc95607916" exitCode=0 Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.823782 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-klpc8" event={"ID":"498a5601-b656-4767-b223-2f59c64d76c5","Type":"ContainerDied","Data":"76a1780294639786eb222fdf49fa236ba7eb6f9e3d9984706a6129dc95607916"} Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.823828 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-klpc8" event={"ID":"498a5601-b656-4767-b223-2f59c64d76c5","Type":"ContainerStarted","Data":"2e8f84b723c67ff69c43af6a5f9fe970bfa58a28f573cc4717d9a1ba8733cae9"} Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.824296 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8mnl5"] Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.825004 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0720-account-create-update-v6x8z" event={"ID":"98ede730-6518-4590-ade8-ecc313c8147f","Type":"ContainerStarted","Data":"c4d19058bd902ace6ac84ef709af12c044e1e14d8975f41aef1907effbae9761"} Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.825055 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0720-account-create-update-v6x8z" event={"ID":"98ede730-6518-4590-ade8-ecc313c8147f","Type":"ContainerStarted","Data":"0653bf73e30acf30d1fec282b5726da193d7be498ef6c7d56c4a4544e23a0d9f"} Jan 31 07:40:33 crc kubenswrapper[4908]: W0131 07:40:33.831125 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea5d5cc9_3f8f_427d_8675_bd883e86e4ee.slice/crio-a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d WatchSource:0}: Error finding container a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d: Status 404 returned error can't find the container with id a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.834454 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-61b8-account-create-update-mwxlx"] Jan 31 07:40:33 crc kubenswrapper[4908]: I0131 07:40:33.854615 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0720-account-create-update-v6x8z" podStartSLOduration=1.854599449 podStartE2EDuration="1.854599449s" 
podCreationTimestamp="2026-01-31 07:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:33.853471987 +0000 UTC m=+1140.469416661" watchObservedRunningTime="2026-01-31 07:40:33.854599449 +0000 UTC m=+1140.470544103" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.120212 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.182723 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts\") pod \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.182898 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j52l5\" (UniqueName: \"kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5\") pod \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\" (UID: \"a8b2e087-b297-4001-aad2-6d7ca61ff1c7\") " Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.184292 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8b2e087-b297-4001-aad2-6d7ca61ff1c7" (UID: "a8b2e087-b297-4001-aad2-6d7ca61ff1c7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.188303 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5" (OuterVolumeSpecName: "kube-api-access-j52l5") pod "a8b2e087-b297-4001-aad2-6d7ca61ff1c7" (UID: "a8b2e087-b297-4001-aad2-6d7ca61ff1c7"). InnerVolumeSpecName "kube-api-access-j52l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.284875 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j52l5\" (UniqueName: \"kubernetes.io/projected/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-kube-api-access-j52l5\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.284903 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b2e087-b297-4001-aad2-6d7ca61ff1c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.834391 4908 generic.go:334] "Generic (PLEG): container finished" podID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerID="bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2" exitCode=0 Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.834487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerDied","Data":"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.838213 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qnzvj" event={"ID":"a8b2e087-b297-4001-aad2-6d7ca61ff1c7","Type":"ContainerDied","Data":"6e7f91d8889af79f297e151bf2cf475aeff902a7082bd6b1e2b1f41870818091"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 
07:40:34.838238 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7f91d8889af79f297e151bf2cf475aeff902a7082bd6b1e2b1f41870818091" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.838302 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qnzvj" Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.841415 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8mnl5" event={"ID":"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee","Type":"ContainerStarted","Data":"1b0973d51ee69d7c2cd26f28fb04fae2f0a5d5648f9085ed9b7b9774817ddfca"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.841556 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8mnl5" event={"ID":"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee","Type":"ContainerStarted","Data":"a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.845031 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-61b8-account-create-update-mwxlx" event={"ID":"97fff54c-e113-417f-b87c-6e01eea5e6b7","Type":"ContainerStarted","Data":"200a62055b8110a7de5a961e12ac6c5a70c85bbd99038e46faf9ba720ec1d598"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.845171 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-61b8-account-create-update-mwxlx" event={"ID":"97fff54c-e113-417f-b87c-6e01eea5e6b7","Type":"ContainerStarted","Data":"b6387b662fd4192f69b302caac7289092ddd4eee8759d38b87d397bfb119a6ea"} Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.849211 4908 generic.go:334] "Generic (PLEG): container finished" podID="98ede730-6518-4590-ade8-ecc313c8147f" containerID="c4d19058bd902ace6ac84ef709af12c044e1e14d8975f41aef1907effbae9761" exitCode=0 Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.849296 4908 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-0720-account-create-update-v6x8z" event={"ID":"98ede730-6518-4590-ade8-ecc313c8147f","Type":"ContainerDied","Data":"c4d19058bd902ace6ac84ef709af12c044e1e14d8975f41aef1907effbae9761"}
Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.853848 4908 generic.go:334] "Generic (PLEG): container finished" podID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerID="c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2" exitCode=0
Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.853965 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerDied","Data":"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2"}
Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.883155 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-61b8-account-create-update-mwxlx" podStartSLOduration=2.88313327 podStartE2EDuration="2.88313327s" podCreationTimestamp="2026-01-31 07:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:34.876185735 +0000 UTC m=+1141.492130409" watchObservedRunningTime="2026-01-31 07:40:34.88313327 +0000 UTC m=+1141.499077924"
Jan 31 07:40:34 crc kubenswrapper[4908]: I0131 07:40:34.957663 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8mnl5" podStartSLOduration=2.957641959 podStartE2EDuration="2.957641959s" podCreationTimestamp="2026-01-31 07:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:34.924017145 +0000 UTC m=+1141.539961809" watchObservedRunningTime="2026-01-31 07:40:34.957641959 +0000 UTC m=+1141.573586613"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.209098 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-klpc8"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.301860 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqdgr\" (UniqueName: \"kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr\") pod \"498a5601-b656-4767-b223-2f59c64d76c5\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") "
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.302337 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts\") pod \"498a5601-b656-4767-b223-2f59c64d76c5\" (UID: \"498a5601-b656-4767-b223-2f59c64d76c5\") "
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.303035 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "498a5601-b656-4767-b223-2f59c64d76c5" (UID: "498a5601-b656-4767-b223-2f59c64d76c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.306893 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr" (OuterVolumeSpecName: "kube-api-access-hqdgr") pod "498a5601-b656-4767-b223-2f59c64d76c5" (UID: "498a5601-b656-4767-b223-2f59c64d76c5"). InnerVolumeSpecName "kube-api-access-hqdgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.403568 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqdgr\" (UniqueName: \"kubernetes.io/projected/498a5601-b656-4767-b223-2f59c64d76c5-kube-api-access-hqdgr\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.403601 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a5601-b656-4767-b223-2f59c64d76c5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.628779 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.862222 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-klpc8" event={"ID":"498a5601-b656-4767-b223-2f59c64d76c5","Type":"ContainerDied","Data":"2e8f84b723c67ff69c43af6a5f9fe970bfa58a28f573cc4717d9a1ba8733cae9"}
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.862267 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8f84b723c67ff69c43af6a5f9fe970bfa58a28f573cc4717d9a1ba8733cae9"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.862232 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-klpc8"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.864399 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerStarted","Data":"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be"}
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.864609 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.866331 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerStarted","Data":"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3"}
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.866496 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.868235 4908 generic.go:334] "Generic (PLEG): container finished" podID="ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" containerID="1b0973d51ee69d7c2cd26f28fb04fae2f0a5d5648f9085ed9b7b9774817ddfca" exitCode=0
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.868295 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8mnl5" event={"ID":"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee","Type":"ContainerDied","Data":"1b0973d51ee69d7c2cd26f28fb04fae2f0a5d5648f9085ed9b7b9774817ddfca"}
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.870358 4908 generic.go:334] "Generic (PLEG): container finished" podID="97fff54c-e113-417f-b87c-6e01eea5e6b7" containerID="200a62055b8110a7de5a961e12ac6c5a70c85bbd99038e46faf9ba720ec1d598" exitCode=0
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.870542 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-61b8-account-create-update-mwxlx" event={"ID":"97fff54c-e113-417f-b87c-6e01eea5e6b7","Type":"ContainerDied","Data":"200a62055b8110a7de5a961e12ac6c5a70c85bbd99038e46faf9ba720ec1d598"}
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.890965 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.087522254 podStartE2EDuration="1m17.890940297s" podCreationTimestamp="2026-01-31 07:39:18 +0000 UTC" firstStartedPulling="2026-01-31 07:39:20.39492054 +0000 UTC m=+1067.010865194" lastFinishedPulling="2026-01-31 07:40:01.198338583 +0000 UTC m=+1107.814283237" observedRunningTime="2026-01-31 07:40:35.886061702 +0000 UTC m=+1142.502006366" watchObservedRunningTime="2026-01-31 07:40:35.890940297 +0000 UTC m=+1142.506884951"
Jan 31 07:40:35 crc kubenswrapper[4908]: I0131 07:40:35.921159 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.393400891 podStartE2EDuration="1m17.921140384s" podCreationTimestamp="2026-01-31 07:39:18 +0000 UTC" firstStartedPulling="2026-01-31 07:39:20.67060422 +0000 UTC m=+1067.286548874" lastFinishedPulling="2026-01-31 07:40:01.198343713 +0000 UTC m=+1107.814288367" observedRunningTime="2026-01-31 07:40:35.9173301 +0000 UTC m=+1142.533274754" watchObservedRunningTime="2026-01-31 07:40:35.921140384 +0000 UTC m=+1142.537085038"
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.195474 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0720-account-create-update-v6x8z"
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.213817 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fjdr\" (UniqueName: \"kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr\") pod \"98ede730-6518-4590-ade8-ecc313c8147f\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") "
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.213962 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts\") pod \"98ede730-6518-4590-ade8-ecc313c8147f\" (UID: \"98ede730-6518-4590-ade8-ecc313c8147f\") "
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.214376 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98ede730-6518-4590-ade8-ecc313c8147f" (UID: "98ede730-6518-4590-ade8-ecc313c8147f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.214520 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ede730-6518-4590-ade8-ecc313c8147f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.217326 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr" (OuterVolumeSpecName: "kube-api-access-7fjdr") pod "98ede730-6518-4590-ade8-ecc313c8147f" (UID: "98ede730-6518-4590-ade8-ecc313c8147f"). InnerVolumeSpecName "kube-api-access-7fjdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.315680 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fjdr\" (UniqueName: \"kubernetes.io/projected/98ede730-6518-4590-ade8-ecc313c8147f-kube-api-access-7fjdr\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.883240 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0720-account-create-update-v6x8z" event={"ID":"98ede730-6518-4590-ade8-ecc313c8147f","Type":"ContainerDied","Data":"0653bf73e30acf30d1fec282b5726da193d7be498ef6c7d56c4a4544e23a0d9f"}
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.884990 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0653bf73e30acf30d1fec282b5726da193d7be498ef6c7d56c4a4544e23a0d9f"
Jan 31 07:40:36 crc kubenswrapper[4908]: I0131 07:40:36.883303 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0720-account-create-update-v6x8z"
Jan 31 07:40:37 crc kubenswrapper[4908]: I0131 07:40:37.998439 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8mnl5"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.004786 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-61b8-account-create-update-mwxlx"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.138617 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-x9fx8"]
Jan 31 07:40:38 crc kubenswrapper[4908]: E0131 07:40:38.138927 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97fff54c-e113-417f-b87c-6e01eea5e6b7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.138947 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="97fff54c-e113-417f-b87c-6e01eea5e6b7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: E0131 07:40:38.138956 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b2e087-b297-4001-aad2-6d7ca61ff1c7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.138962 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b2e087-b297-4001-aad2-6d7ca61ff1c7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: E0131 07:40:38.138997 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ede730-6518-4590-ade8-ecc313c8147f" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139003 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ede730-6518-4590-ade8-ecc313c8147f" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: E0131 07:40:38.139015 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498a5601-b656-4767-b223-2f59c64d76c5" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139021 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="498a5601-b656-4767-b223-2f59c64d76c5" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: E0131 07:40:38.139036 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139043 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139182 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="498a5601-b656-4767-b223-2f59c64d76c5" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139193 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="97fff54c-e113-417f-b87c-6e01eea5e6b7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139202 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ede730-6518-4590-ade8-ecc313c8147f" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139214 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b2e087-b297-4001-aad2-6d7ca61ff1c7" containerName="mariadb-account-create-update"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.139223 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" containerName="mariadb-database-create"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.141039 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.146878 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9fx8"]
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.154311 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts\") pod \"97fff54c-e113-417f-b87c-6e01eea5e6b7\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") "
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.154354 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc57k\" (UniqueName: \"kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k\") pod \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") "
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.154397 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vzrr\" (UniqueName: \"kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr\") pod \"97fff54c-e113-417f-b87c-6e01eea5e6b7\" (UID: \"97fff54c-e113-417f-b87c-6e01eea5e6b7\") "
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.154522 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts\") pod \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\" (UID: \"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee\") "
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.155070 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" (UID: "ea5d5cc9-3f8f-427d-8675-bd883e86e4ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.155579 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97fff54c-e113-417f-b87c-6e01eea5e6b7" (UID: "97fff54c-e113-417f-b87c-6e01eea5e6b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.159337 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k" (OuterVolumeSpecName: "kube-api-access-kc57k") pod "ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" (UID: "ea5d5cc9-3f8f-427d-8675-bd883e86e4ee"). InnerVolumeSpecName "kube-api-access-kc57k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.163053 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr" (OuterVolumeSpecName: "kube-api-access-9vzrr") pod "97fff54c-e113-417f-b87c-6e01eea5e6b7" (UID: "97fff54c-e113-417f-b87c-6e01eea5e6b7"). InnerVolumeSpecName "kube-api-access-9vzrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.249861 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f74b-account-create-update-nz9tr"]
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.250961 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.252933 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.260349 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f74b-account-create-update-nz9tr"]
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261658 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261819 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwp88\" (UniqueName: \"kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261895 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vzrr\" (UniqueName: \"kubernetes.io/projected/97fff54c-e113-417f-b87c-6e01eea5e6b7-kube-api-access-9vzrr\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261908 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261935 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97fff54c-e113-417f-b87c-6e01eea5e6b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.261943 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc57k\" (UniqueName: \"kubernetes.io/projected/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee-kube-api-access-kc57k\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.364086 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjbx8\" (UniqueName: \"kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.364528 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwp88\" (UniqueName: \"kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.364765 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.365050 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.366272 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.384908 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwp88\" (UniqueName: \"kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88\") pod \"glance-db-create-x9fx8\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") " pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.454737 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.466547 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.466606 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjbx8\" (UniqueName: \"kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.467592 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.640457 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjbx8\" (UniqueName: \"kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8\") pod \"glance-f74b-account-create-update-nz9tr\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") " pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.872159 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.912399 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8mnl5" event={"ID":"ea5d5cc9-3f8f-427d-8675-bd883e86e4ee","Type":"ContainerDied","Data":"a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d"}
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.912429 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8mnl5"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.912452 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b190999bef72519d073fcddce7996fc8a33f3bf63bc7e705799cf3b425f78d"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.913810 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-61b8-account-create-update-mwxlx" event={"ID":"97fff54c-e113-417f-b87c-6e01eea5e6b7","Type":"ContainerDied","Data":"b6387b662fd4192f69b302caac7289092ddd4eee8759d38b87d397bfb119a6ea"}
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.913833 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6387b662fd4192f69b302caac7289092ddd4eee8759d38b87d397bfb119a6ea"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.913861 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-61b8-account-create-update-mwxlx"
Jan 31 07:40:38 crc kubenswrapper[4908]: I0131 07:40:38.993942 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9fx8"]
Jan 31 07:40:38 crc kubenswrapper[4908]: W0131 07:40:38.998279 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53b78a9_d76e_42fe_b026_f44b248941ea.slice/crio-b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637 WatchSource:0}: Error finding container b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637: Status 404 returned error can't find the container with id b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.133589 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f74b-account-create-update-nz9tr"]
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.861841 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qnzvj"]
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.868492 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qnzvj"]
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.928269 4908 generic.go:334] "Generic (PLEG): container finished" podID="06b43a84-efc2-407f-818a-afc0151b60d8" containerID="e0f3915037dc7bc82589d800d49cfab360555f4719e2be36274ee4a9375d5974" exitCode=0
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.928373 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74b-account-create-update-nz9tr" event={"ID":"06b43a84-efc2-407f-818a-afc0151b60d8","Type":"ContainerDied","Data":"e0f3915037dc7bc82589d800d49cfab360555f4719e2be36274ee4a9375d5974"}
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.928401 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74b-account-create-update-nz9tr" event={"ID":"06b43a84-efc2-407f-818a-afc0151b60d8","Type":"ContainerStarted","Data":"99c7764219ad758f4d1a78ded14be0224749027f33931bfe4086ccff4f8aea43"}
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.930114 4908 generic.go:334] "Generic (PLEG): container finished" podID="e53b78a9-d76e-42fe-b026-f44b248941ea" containerID="da0309834ce602d6d65f7f9b7e6454625ccfa7d4dc45bec65a022b2aa319b4ef" exitCode=0
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.930167 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9fx8" event={"ID":"e53b78a9-d76e-42fe-b026-f44b248941ea","Type":"ContainerDied","Data":"da0309834ce602d6d65f7f9b7e6454625ccfa7d4dc45bec65a022b2aa319b4ef"}
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.930197 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9fx8" event={"ID":"e53b78a9-d76e-42fe-b026-f44b248941ea","Type":"ContainerStarted","Data":"b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637"}
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.969750 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b2e087-b297-4001-aad2-6d7ca61ff1c7" path="/var/lib/kubelet/pods/a8b2e087-b297-4001-aad2-6d7ca61ff1c7/volumes"
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.970412 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5n2cn"]
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.976315 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5n2cn"]
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.985673 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:39 crc kubenswrapper[4908]: I0131 07:40:39.995992 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.097125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzgh\" (UniqueName: \"kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.098064 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.199426 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzgh\" (UniqueName: \"kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.199497 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.200387 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.221943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzgh\" (UniqueName: \"kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh\") pod \"root-account-create-update-5n2cn\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.323849 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5n2cn"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.431489 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.431716 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.772936 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5n2cn"]
Jan 31 07:40:40 crc kubenswrapper[4908]: I0131 07:40:40.939875 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5n2cn" event={"ID":"da434b5e-8218-4d2c-a269-b1ac41d172d0","Type":"ContainerStarted","Data":"36dc76a3a62b7c5ca8e8f1e08b3cba9c8ea3565bce20e60c68f8e7025f9aa362"}
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.207728 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f74b-account-create-update-nz9tr"
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.315694 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts\") pod \"06b43a84-efc2-407f-818a-afc0151b60d8\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") "
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.315812 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjbx8\" (UniqueName: \"kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8\") pod \"06b43a84-efc2-407f-818a-afc0151b60d8\" (UID: \"06b43a84-efc2-407f-818a-afc0151b60d8\") "
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.316845 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06b43a84-efc2-407f-818a-afc0151b60d8" (UID: "06b43a84-efc2-407f-818a-afc0151b60d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.322180 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8" (OuterVolumeSpecName: "kube-api-access-zjbx8") pod "06b43a84-efc2-407f-818a-afc0151b60d8" (UID: "06b43a84-efc2-407f-818a-afc0151b60d8"). InnerVolumeSpecName "kube-api-access-zjbx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.417281 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjbx8\" (UniqueName: \"kubernetes.io/projected/06b43a84-efc2-407f-818a-afc0151b60d8-kube-api-access-zjbx8\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.417309 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b43a84-efc2-407f-818a-afc0151b60d8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.423829 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9fx8"
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.518663 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwp88\" (UniqueName: \"kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88\") pod \"e53b78a9-d76e-42fe-b026-f44b248941ea\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") "
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.518849 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts\") pod \"e53b78a9-d76e-42fe-b026-f44b248941ea\" (UID: \"e53b78a9-d76e-42fe-b026-f44b248941ea\") "
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.519358 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e53b78a9-d76e-42fe-b026-f44b248941ea" (UID: "e53b78a9-d76e-42fe-b026-f44b248941ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.529627 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88" (OuterVolumeSpecName: "kube-api-access-mwp88") pod "e53b78a9-d76e-42fe-b026-f44b248941ea" (UID: "e53b78a9-d76e-42fe-b026-f44b248941ea"). InnerVolumeSpecName "kube-api-access-mwp88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.620401 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53b78a9-d76e-42fe-b026-f44b248941ea-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.620706 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwp88\" (UniqueName: \"kubernetes.io/projected/e53b78a9-d76e-42fe-b026-f44b248941ea-kube-api-access-mwp88\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.956862 4908 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-f74b-account-create-update-nz9tr" Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.956870 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74b-account-create-update-nz9tr" event={"ID":"06b43a84-efc2-407f-818a-afc0151b60d8","Type":"ContainerDied","Data":"99c7764219ad758f4d1a78ded14be0224749027f33931bfe4086ccff4f8aea43"} Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.957015 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c7764219ad758f4d1a78ded14be0224749027f33931bfe4086ccff4f8aea43" Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.958451 4908 generic.go:334] "Generic (PLEG): container finished" podID="da434b5e-8218-4d2c-a269-b1ac41d172d0" containerID="f50e6614aa35d50b94047b92b2bdcb064c0ad6ff15efe50dc3d4f93d26c0b004" exitCode=0 Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.958509 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5n2cn" event={"ID":"da434b5e-8218-4d2c-a269-b1ac41d172d0","Type":"ContainerDied","Data":"f50e6614aa35d50b94047b92b2bdcb064c0ad6ff15efe50dc3d4f93d26c0b004"} Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.959803 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9fx8" event={"ID":"e53b78a9-d76e-42fe-b026-f44b248941ea","Type":"ContainerDied","Data":"b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637"} Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.959825 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5222964173e44e67f3c4d534dcf62b7acdf607b743c7988bd31393103f41637" Jan 31 07:40:41 crc kubenswrapper[4908]: I0131 07:40:41.959836 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-x9fx8" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.362442 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5n2cn" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.369527 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jt4wn" podUID="d6e9ace1-1aad-474c-a7be-73e4a08770e1" containerName="ovn-controller" probeResult="failure" output=< Jan 31 07:40:43 crc kubenswrapper[4908]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 07:40:43 crc kubenswrapper[4908]: > Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.406285 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lnjb2" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.436676 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lnjb2" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.443698 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-lt52x"] Jan 31 07:40:43 crc kubenswrapper[4908]: E0131 07:40:43.444045 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da434b5e-8218-4d2c-a269-b1ac41d172d0" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444060 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="da434b5e-8218-4d2c-a269-b1ac41d172d0" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: E0131 07:40:43.444083 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b43a84-efc2-407f-818a-afc0151b60d8" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444089 4908 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="06b43a84-efc2-407f-818a-afc0151b60d8" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: E0131 07:40:43.444102 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53b78a9-d76e-42fe-b026-f44b248941ea" containerName="mariadb-database-create" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444108 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53b78a9-d76e-42fe-b026-f44b248941ea" containerName="mariadb-database-create" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444239 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53b78a9-d76e-42fe-b026-f44b248941ea" containerName="mariadb-database-create" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444256 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="da434b5e-8218-4d2c-a269-b1ac41d172d0" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444264 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b43a84-efc2-407f-818a-afc0151b60d8" containerName="mariadb-account-create-update" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.444756 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.448883 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.449141 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-npr59" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.465016 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lt52x"] Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.564503 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts\") pod \"da434b5e-8218-4d2c-a269-b1ac41d172d0\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.564656 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mzgh\" (UniqueName: \"kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh\") pod \"da434b5e-8218-4d2c-a269-b1ac41d172d0\" (UID: \"da434b5e-8218-4d2c-a269-b1ac41d172d0\") " Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.564914 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6r2\" (UniqueName: \"kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.564995 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle\") pod \"glance-db-sync-lt52x\" 
(UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.565023 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.565044 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.565339 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da434b5e-8218-4d2c-a269-b1ac41d172d0" (UID: "da434b5e-8218-4d2c-a269-b1ac41d172d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.570889 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh" (OuterVolumeSpecName: "kube-api-access-5mzgh") pod "da434b5e-8218-4d2c-a269-b1ac41d172d0" (UID: "da434b5e-8218-4d2c-a269-b1ac41d172d0"). InnerVolumeSpecName "kube-api-access-5mzgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666432 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6r2\" (UniqueName: \"kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666533 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666568 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666598 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666710 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da434b5e-8218-4d2c-a269-b1ac41d172d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.666726 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mzgh\" (UniqueName: 
\"kubernetes.io/projected/da434b5e-8218-4d2c-a269-b1ac41d172d0-kube-api-access-5mzgh\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.670188 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.670389 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.673971 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.691013 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6r2\" (UniqueName: \"kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2\") pod \"glance-db-sync-lt52x\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.721112 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jt4wn-config-vwgv9"] Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.722318 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.725939 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.744436 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn-config-vwgv9"] Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.786280 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lt52x" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869120 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4ld\" (UniqueName: \"kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869194 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869228 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869287 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869311 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.869345 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970567 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970616 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970666 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970744 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4ld\" (UniqueName: \"kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970804 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.970840 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.971024 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.971030 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.971296 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.971612 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.974840 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.976573 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5n2cn" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.976942 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5n2cn" event={"ID":"da434b5e-8218-4d2c-a269-b1ac41d172d0","Type":"ContainerDied","Data":"36dc76a3a62b7c5ca8e8f1e08b3cba9c8ea3565bce20e60c68f8e7025f9aa362"} Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.976971 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36dc76a3a62b7c5ca8e8f1e08b3cba9c8ea3565bce20e60c68f8e7025f9aa362" Jan 31 07:40:43 crc kubenswrapper[4908]: I0131 07:40:43.994505 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4ld\" (UniqueName: \"kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld\") pod \"ovn-controller-jt4wn-config-vwgv9\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.063688 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.306469 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn-config-vwgv9"] Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.390994 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lt52x"] Jan 31 07:40:44 crc kubenswrapper[4908]: W0131 07:40:44.401171 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35b314ce_a7db_42b5_b571_2f23c1065d37.slice/crio-6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94 WatchSource:0}: Error finding container 6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94: Status 404 returned error can't find the container with id 6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94 Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.984074 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt52x" event={"ID":"35b314ce-a7db-42b5-b571-2f23c1065d37","Type":"ContainerStarted","Data":"6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94"} Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.985643 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-vwgv9" event={"ID":"2de50407-9d36-4c6a-b6d7-85378a89ac33","Type":"ContainerStarted","Data":"0fba0ecf14463cb8eb50c86337c160f74c904dd5caeafb6be3ab2ccd2b91b43e"} Jan 31 07:40:44 crc kubenswrapper[4908]: I0131 07:40:44.985688 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-vwgv9" event={"ID":"2de50407-9d36-4c6a-b6d7-85378a89ac33","Type":"ContainerStarted","Data":"ce98e69d8b8e29c2fc70b7e634371a439aea934e4090215c7ba1babb43b1e876"} Jan 31 07:40:45 crc kubenswrapper[4908]: I0131 07:40:45.006019 4908 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-jt4wn-config-vwgv9" podStartSLOduration=2.005993499 podStartE2EDuration="2.005993499s" podCreationTimestamp="2026-01-31 07:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:44.999697447 +0000 UTC m=+1151.615642131" watchObservedRunningTime="2026-01-31 07:40:45.005993499 +0000 UTC m=+1151.621938163" Jan 31 07:40:45 crc kubenswrapper[4908]: I0131 07:40:45.997618 4908 generic.go:334] "Generic (PLEG): container finished" podID="2de50407-9d36-4c6a-b6d7-85378a89ac33" containerID="0fba0ecf14463cb8eb50c86337c160f74c904dd5caeafb6be3ab2ccd2b91b43e" exitCode=0 Jan 31 07:40:45 crc kubenswrapper[4908]: I0131 07:40:45.997914 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-vwgv9" event={"ID":"2de50407-9d36-4c6a-b6d7-85378a89ac33","Type":"ContainerDied","Data":"0fba0ecf14463cb8eb50c86337c160f74c904dd5caeafb6be3ab2ccd2b91b43e"} Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.370036 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-vwgv9" Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431160 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t4ld\" (UniqueName: \"kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431207 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431227 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431255 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") " Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431309 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431320 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") "
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431345 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431377 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts\") pod \"2de50407-9d36-4c6a-b6d7-85378a89ac33\" (UID: \"2de50407-9d36-4c6a-b6d7-85378a89ac33\") "
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431428 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run" (OuterVolumeSpecName: "var-run") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431593 4908 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431605 4908 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.431613 4908 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de50407-9d36-4c6a-b6d7-85378a89ac33-var-run\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.432116 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.432579 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts" (OuterVolumeSpecName: "scripts") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.437074 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld" (OuterVolumeSpecName: "kube-api-access-5t4ld") pod "2de50407-9d36-4c6a-b6d7-85378a89ac33" (UID: "2de50407-9d36-4c6a-b6d7-85378a89ac33"). InnerVolumeSpecName "kube-api-access-5t4ld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.532975 4908 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.533046 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de50407-9d36-4c6a-b6d7-85378a89ac33-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:47 crc kubenswrapper[4908]: I0131 07:40:47.533059 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t4ld\" (UniqueName: \"kubernetes.io/projected/2de50407-9d36-4c6a-b6d7-85378a89ac33-kube-api-access-5t4ld\") on node \"crc\" DevicePath \"\""
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.013484 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-vwgv9" event={"ID":"2de50407-9d36-4c6a-b6d7-85378a89ac33","Type":"ContainerDied","Data":"ce98e69d8b8e29c2fc70b7e634371a439aea934e4090215c7ba1babb43b1e876"}
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.013532 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce98e69d8b8e29c2fc70b7e634371a439aea934e4090215c7ba1babb43b1e876"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.013579 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-vwgv9"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.092137 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jt4wn-config-vwgv9"]
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.098606 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jt4wn-config-vwgv9"]
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.199207 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jt4wn-config-g788k"]
Jan 31 07:40:48 crc kubenswrapper[4908]: E0131 07:40:48.199507 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de50407-9d36-4c6a-b6d7-85378a89ac33" containerName="ovn-config"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.199523 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de50407-9d36-4c6a-b6d7-85378a89ac33" containerName="ovn-config"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.199680 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de50407-9d36-4c6a-b6d7-85378a89ac33" containerName="ovn-config"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.200154 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.202348 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.217485 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn-config-g788k"]
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.336683 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jt4wn"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.350647 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.350999 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.351027 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.351062 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.351110 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gt8q\" (UniqueName: \"kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.351142 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454268 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454366 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454411 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454460 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gt8q\" (UniqueName: \"kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454496 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.454559 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.455575 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.455767 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.455805 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.456209 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.457849 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.477111 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gt8q\" (UniqueName: \"kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q\") pod \"ovn-controller-jt4wn-config-g788k\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.530390 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-g788k"
Jan 31 07:40:48 crc kubenswrapper[4908]: I0131 07:40:48.900096 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jt4wn-config-g788k"]
Jan 31 07:40:48 crc kubenswrapper[4908]: W0131 07:40:48.911878 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48106831_736b_4a62_88a8_7dd262a2809e.slice/crio-d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304 WatchSource:0}: Error finding container d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304: Status 404 returned error can't find the container with id d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304
Jan 31 07:40:49 crc kubenswrapper[4908]: I0131 07:40:49.024364 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-g788k" event={"ID":"48106831-736b-4a62-88a8-7dd262a2809e","Type":"ContainerStarted","Data":"d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304"}
Jan 31 07:40:49 crc kubenswrapper[4908]: I0131 07:40:49.816350 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 31 07:40:49 crc kubenswrapper[4908]: I0131 07:40:49.953947 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de50407-9d36-4c6a-b6d7-85378a89ac33" path="/var/lib/kubelet/pods/2de50407-9d36-4c6a-b6d7-85378a89ac33/volumes"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.053689 4908 generic.go:334] "Generic (PLEG): container finished" podID="48106831-736b-4a62-88a8-7dd262a2809e" containerID="c3ae7e69d0069c50ce083818c9ea6e8c53b38b22a9531538ddbc07d84ab7aac7" exitCode=0
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.053747 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-g788k" event={"ID":"48106831-736b-4a62-88a8-7dd262a2809e","Type":"ContainerDied","Data":"c3ae7e69d0069c50ce083818c9ea6e8c53b38b22a9531538ddbc07d84ab7aac7"}
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.208187 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.209147 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0c73-account-create-update-ckgrp"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.210233 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.214129 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.223187 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-49kmd"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.224609 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.240133 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0c73-account-create-update-ckgrp"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.288023 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-49kmd"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.358325 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-czx5b"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.360322 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.388075 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpfl\" (UniqueName: \"kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.388203 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.388230 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.388313 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9hx\" (UniqueName: \"kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.388551 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a3f7-account-create-update-xwb54"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.389760 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.396483 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.424005 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-czx5b"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.435074 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a3f7-account-create-update-xwb54"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.490999 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491070 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491119 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491228 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk9hx\" (UniqueName: \"kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491283 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdsqt\" (UniqueName: \"kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491334 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg475\" (UniqueName: \"kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491385 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.491478 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpfl\" (UniqueName: \"kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.494355 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.502325 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.528562 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpfl\" (UniqueName: \"kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl\") pod \"cinder-db-create-49kmd\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.529369 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk9hx\" (UniqueName: \"kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx\") pod \"cinder-0c73-account-create-update-ckgrp\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.535954 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kg2tc"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.537597 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0c73-account-create-update-ckgrp"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.537820 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.555526 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-49kmd"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.570243 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kg2tc"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594439 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg475\" (UniqueName: \"kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594523 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkgj2\" (UniqueName: \"kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594550 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594629 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594648 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.594704 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdsqt\" (UniqueName: \"kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.596319 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.597514 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.645630 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdsqt\" (UniqueName: \"kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt\") pod \"barbican-db-create-czx5b\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.655807 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg475\" (UniqueName: \"kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475\") pod \"barbican-a3f7-account-create-update-xwb54\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.674067 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7a8d-account-create-update-gh5qr"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.675709 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.678038 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.684265 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4xfqn"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.685541 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.688768 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.689003 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.689174 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68r4f"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.689214 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.694923 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7a8d-account-create-update-gh5qr"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.695676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.695796 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkgj2\" (UniqueName: \"kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.697014 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.709310 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4xfqn"]
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.709695 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-czx5b"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.713305 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkgj2\" (UniqueName: \"kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2\") pod \"neutron-db-create-kg2tc\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " pod="openstack/neutron-db-create-kg2tc"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.717289 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3f7-account-create-update-xwb54"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.797553 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq94j\" (UniqueName: \"kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.797598 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.797653 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.797675 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7hr\" (UniqueName: \"kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.797710 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.900035 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.900119 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7hr\" (UniqueName: \"kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.900184 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.900291 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq94j\" (UniqueName: \"kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.900336 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.901741 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.904825 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn"
Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.904790 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn" Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.911392 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kg2tc" Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.924559 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq94j\" (UniqueName: \"kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j\") pod \"keystone-db-sync-4xfqn\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " pod="openstack/keystone-db-sync-4xfqn" Jan 31 07:40:50 crc kubenswrapper[4908]: I0131 07:40:50.924712 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm7hr\" (UniqueName: \"kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr\") pod \"neutron-7a8d-account-create-update-gh5qr\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " pod="openstack/neutron-7a8d-account-create-update-gh5qr" Jan 31 07:40:51 crc kubenswrapper[4908]: I0131 07:40:51.060641 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7a8d-account-create-update-gh5qr" Jan 31 07:40:51 crc kubenswrapper[4908]: I0131 07:40:51.067688 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4xfqn" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.577946 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-g788k" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.632773 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.632829 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.632907 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.632956 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.633051 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gt8q\" (UniqueName: \"kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.633093 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts\") pod \"48106831-736b-4a62-88a8-7dd262a2809e\" (UID: \"48106831-736b-4a62-88a8-7dd262a2809e\") " Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.634773 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.634780 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.634828 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.634855 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run" (OuterVolumeSpecName: "var-run") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.635092 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts" (OuterVolumeSpecName: "scripts") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.641543 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q" (OuterVolumeSpecName: "kube-api-access-5gt8q") pod "48106831-736b-4a62-88a8-7dd262a2809e" (UID: "48106831-736b-4a62-88a8-7dd262a2809e"). InnerVolumeSpecName "kube-api-access-5gt8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.704683 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a3f7-account-create-update-xwb54"] Jan 31 07:40:58 crc kubenswrapper[4908]: W0131 07:40:58.710187 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22e90746_ae14_4f0b_af18_258e35239b0d.slice/crio-b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f WatchSource:0}: Error finding container b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f: Status 404 returned error can't find the container with id b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734674 4908 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734706 4908 
reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734716 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gt8q\" (UniqueName: \"kubernetes.io/projected/48106831-736b-4a62-88a8-7dd262a2809e-kube-api-access-5gt8q\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734724 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734734 4908 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48106831-736b-4a62-88a8-7dd262a2809e-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.734741 4908 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48106831-736b-4a62-88a8-7dd262a2809e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:58 crc kubenswrapper[4908]: I0131 07:40:58.878000 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0c73-account-create-update-ckgrp"] Jan 31 07:40:58 crc kubenswrapper[4908]: W0131 07:40:58.879094 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1232d6b2_5c5a_478a_936e_cf5ab61abd80.slice/crio-d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5 WatchSource:0}: Error finding container d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5: Status 404 returned error can't find the container with id d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5 Jan 31 07:40:58 crc 
kubenswrapper[4908]: I0131 07:40:58.967928 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kg2tc"] Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.069144 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4xfqn"] Jan 31 07:40:59 crc kubenswrapper[4908]: W0131 07:40:59.073528 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c53a541_b434_4bf6_8b68_6b56f14fee52.slice/crio-25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988 WatchSource:0}: Error finding container 25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988: Status 404 returned error can't find the container with id 25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988 Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.075824 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7a8d-account-create-update-gh5qr"] Jan 31 07:40:59 crc kubenswrapper[4908]: W0131 07:40:59.077463 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99373758_9712_495b_bca3_40192fac3419.slice/crio-a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc WatchSource:0}: Error finding container a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc: Status 404 returned error can't find the container with id a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.083472 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-49kmd"] Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.096461 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-czx5b"] Jan 31 07:40:59 crc kubenswrapper[4908]: W0131 07:40:59.104757 4908 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f29d7a2_caad_495f_a5c0_b58ddb2f2790.slice/crio-e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c WatchSource:0}: Error finding container e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c: Status 404 returned error can't find the container with id e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.137577 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kg2tc" event={"ID":"9fbfeac6-331c-4893-be5c-40183532e503","Type":"ContainerStarted","Data":"c8df52cf79b2ff0dcb2ecb1aa54f50ea2bc1b66a4f81ec635c18195065239601"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.140784 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0c73-account-create-update-ckgrp" event={"ID":"1232d6b2-5c5a-478a-936e-cf5ab61abd80","Type":"ContainerStarted","Data":"9ffad97bdd94d95624830aa4001770e601176f665032368cd234ac41dcefc58f"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.140933 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0c73-account-create-update-ckgrp" event={"ID":"1232d6b2-5c5a-478a-936e-cf5ab61abd80","Type":"ContainerStarted","Data":"d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.145001 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7a8d-account-create-update-gh5qr" event={"ID":"fdcde0c8-5958-4c81-8860-1be3a31bcb5c","Type":"ContainerStarted","Data":"03234c5ae42764f70f7d4d2bfe866dd8faa4f86a38dd2da4faa419f9c18a598c"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.147551 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czx5b" 
event={"ID":"1f29d7a2-caad-495f-a5c0-b58ddb2f2790","Type":"ContainerStarted","Data":"e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.150053 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3f7-account-create-update-xwb54" event={"ID":"22e90746-ae14-4f0b-af18-258e35239b0d","Type":"ContainerStarted","Data":"9cd31e4113c0a9128d3d5415f958e99bcf9ee0251f9d07ec5593740f5dfc421e"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.150101 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3f7-account-create-update-xwb54" event={"ID":"22e90746-ae14-4f0b-af18-258e35239b0d","Type":"ContainerStarted","Data":"b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.163414 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0c73-account-create-update-ckgrp" podStartSLOduration=9.163395316 podStartE2EDuration="9.163395316s" podCreationTimestamp="2026-01-31 07:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:59.161208444 +0000 UTC m=+1165.777153098" watchObservedRunningTime="2026-01-31 07:40:59.163395316 +0000 UTC m=+1165.779339970" Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.165530 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jt4wn-config-g788k" event={"ID":"48106831-736b-4a62-88a8-7dd262a2809e","Type":"ContainerDied","Data":"d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.165575 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b19831f4d63882b9eada35b8ed0c9771e24d0a693b1151f5897d226d337304" Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.165645 4908 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jt4wn-config-g788k" Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.184205 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-49kmd" event={"ID":"99373758-9712-495b-bca3-40192fac3419","Type":"ContainerStarted","Data":"a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.188047 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a3f7-account-create-update-xwb54" podStartSLOduration=9.188029355 podStartE2EDuration="9.188029355s" podCreationTimestamp="2026-01-31 07:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:59.178789716 +0000 UTC m=+1165.794734370" watchObservedRunningTime="2026-01-31 07:40:59.188029355 +0000 UTC m=+1165.803974009" Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.193433 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xfqn" event={"ID":"0c53a541-b434-4bf6-8b68-6b56f14fee52","Type":"ContainerStarted","Data":"25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988"} Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.658012 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jt4wn-config-g788k"] Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.667716 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jt4wn-config-g788k"] Jan 31 07:40:59 crc kubenswrapper[4908]: I0131 07:40:59.953042 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48106831-736b-4a62-88a8-7dd262a2809e" path="/var/lib/kubelet/pods/48106831-736b-4a62-88a8-7dd262a2809e/volumes" Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.202760 4908 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt52x" event={"ID":"35b314ce-a7db-42b5-b571-2f23c1065d37","Type":"ContainerStarted","Data":"3ecd7306e601d01a7c2f2be8420c80d74111a15ce1a330c64942316c09b14796"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.206858 4908 generic.go:334] "Generic (PLEG): container finished" podID="99373758-9712-495b-bca3-40192fac3419" containerID="8dc774e1d2e9e88780381fad48fcee1627765a199313fab75a58b9ed0d9a83b5" exitCode=0 Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.207003 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-49kmd" event={"ID":"99373758-9712-495b-bca3-40192fac3419","Type":"ContainerDied","Data":"8dc774e1d2e9e88780381fad48fcee1627765a199313fab75a58b9ed0d9a83b5"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.208409 4908 generic.go:334] "Generic (PLEG): container finished" podID="9fbfeac6-331c-4893-be5c-40183532e503" containerID="5bd9d7538355958db4a02616f1dc697d8a25a8c9f57dff795fe49bccb9b6eadf" exitCode=0 Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.208443 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kg2tc" event={"ID":"9fbfeac6-331c-4893-be5c-40183532e503","Type":"ContainerDied","Data":"5bd9d7538355958db4a02616f1dc697d8a25a8c9f57dff795fe49bccb9b6eadf"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.209903 4908 generic.go:334] "Generic (PLEG): container finished" podID="1232d6b2-5c5a-478a-936e-cf5ab61abd80" containerID="9ffad97bdd94d95624830aa4001770e601176f665032368cd234ac41dcefc58f" exitCode=0 Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.209932 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0c73-account-create-update-ckgrp" event={"ID":"1232d6b2-5c5a-478a-936e-cf5ab61abd80","Type":"ContainerDied","Data":"9ffad97bdd94d95624830aa4001770e601176f665032368cd234ac41dcefc58f"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 
07:41:00.212126 4908 generic.go:334] "Generic (PLEG): container finished" podID="fdcde0c8-5958-4c81-8860-1be3a31bcb5c" containerID="4fa72dfb8e1a47fffe8534fee40655a18ca2d63841b56e3af3d802dbf9e4f09d" exitCode=0 Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.212188 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7a8d-account-create-update-gh5qr" event={"ID":"fdcde0c8-5958-4c81-8860-1be3a31bcb5c","Type":"ContainerDied","Data":"4fa72dfb8e1a47fffe8534fee40655a18ca2d63841b56e3af3d802dbf9e4f09d"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.220877 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-lt52x" podStartSLOduration=3.162960258 podStartE2EDuration="17.220829709s" podCreationTimestamp="2026-01-31 07:40:43 +0000 UTC" firstStartedPulling="2026-01-31 07:40:44.404064524 +0000 UTC m=+1151.020009178" lastFinishedPulling="2026-01-31 07:40:58.461933975 +0000 UTC m=+1165.077878629" observedRunningTime="2026-01-31 07:41:00.215540597 +0000 UTC m=+1166.831485271" watchObservedRunningTime="2026-01-31 07:41:00.220829709 +0000 UTC m=+1166.836774373" Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.227207 4908 generic.go:334] "Generic (PLEG): container finished" podID="1f29d7a2-caad-495f-a5c0-b58ddb2f2790" containerID="5abdf72af07280f20669c568f8344aa7660c20c711b9af3c360863cf5e4cc72a" exitCode=0 Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.227302 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czx5b" event={"ID":"1f29d7a2-caad-495f-a5c0-b58ddb2f2790","Type":"ContainerDied","Data":"5abdf72af07280f20669c568f8344aa7660c20c711b9af3c360863cf5e4cc72a"} Jan 31 07:41:00 crc kubenswrapper[4908]: I0131 07:41:00.228728 4908 generic.go:334] "Generic (PLEG): container finished" podID="22e90746-ae14-4f0b-af18-258e35239b0d" containerID="9cd31e4113c0a9128d3d5415f958e99bcf9ee0251f9d07ec5593740f5dfc421e" exitCode=0 Jan 31 07:41:00 crc 
kubenswrapper[4908]: I0131 07:41:00.228765 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3f7-account-create-update-xwb54" event={"ID":"22e90746-ae14-4f0b-af18-258e35239b0d","Type":"ContainerDied","Data":"9cd31e4113c0a9128d3d5415f958e99bcf9ee0251f9d07ec5593740f5dfc421e"} Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.714291 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3f7-account-create-update-xwb54" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.727036 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7a8d-account-create-update-gh5qr" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.733430 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-49kmd" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.763950 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0c73-account-create-update-ckgrp" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.766018 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-czx5b" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.775047 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kg2tc" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.896696 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts\") pod \"22e90746-ae14-4f0b-af18-258e35239b0d\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.896758 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdsqt\" (UniqueName: \"kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt\") pod \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.896788 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk9hx\" (UniqueName: \"kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx\") pod \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.896850 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts\") pod \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\" (UID: \"1232d6b2-5c5a-478a-936e-cf5ab61abd80\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.896922 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts\") pod \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\" (UID: \"1f29d7a2-caad-495f-a5c0-b58ddb2f2790\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897152 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts\") pod \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897182 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkgj2\" (UniqueName: \"kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2\") pod \"9fbfeac6-331c-4893-be5c-40183532e503\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897233 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzpfl\" (UniqueName: \"kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl\") pod \"99373758-9712-495b-bca3-40192fac3419\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897319 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts\") pod \"9fbfeac6-331c-4893-be5c-40183532e503\" (UID: \"9fbfeac6-331c-4893-be5c-40183532e503\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897346 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts\") pod \"99373758-9712-495b-bca3-40192fac3419\" (UID: \"99373758-9712-495b-bca3-40192fac3419\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897370 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm7hr\" (UniqueName: \"kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr\") pod 
\"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\" (UID: \"fdcde0c8-5958-4c81-8860-1be3a31bcb5c\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.897401 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg475\" (UniqueName: \"kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475\") pod \"22e90746-ae14-4f0b-af18-258e35239b0d\" (UID: \"22e90746-ae14-4f0b-af18-258e35239b0d\") " Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.901665 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdcde0c8-5958-4c81-8860-1be3a31bcb5c" (UID: "fdcde0c8-5958-4c81-8860-1be3a31bcb5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.901786 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f29d7a2-caad-495f-a5c0-b58ddb2f2790" (UID: "1f29d7a2-caad-495f-a5c0-b58ddb2f2790"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.901823 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1232d6b2-5c5a-478a-936e-cf5ab61abd80" (UID: "1232d6b2-5c5a-478a-936e-cf5ab61abd80"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.901963 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99373758-9712-495b-bca3-40192fac3419" (UID: "99373758-9712-495b-bca3-40192fac3419"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.902013 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22e90746-ae14-4f0b-af18-258e35239b0d" (UID: "22e90746-ae14-4f0b-af18-258e35239b0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.902322 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fbfeac6-331c-4893-be5c-40183532e503" (UID: "9fbfeac6-331c-4893-be5c-40183532e503"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.902449 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475" (OuterVolumeSpecName: "kube-api-access-jg475") pod "22e90746-ae14-4f0b-af18-258e35239b0d" (UID: "22e90746-ae14-4f0b-af18-258e35239b0d"). InnerVolumeSpecName "kube-api-access-jg475". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.904617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2" (OuterVolumeSpecName: "kube-api-access-qkgj2") pod "9fbfeac6-331c-4893-be5c-40183532e503" (UID: "9fbfeac6-331c-4893-be5c-40183532e503"). InnerVolumeSpecName "kube-api-access-qkgj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.904657 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr" (OuterVolumeSpecName: "kube-api-access-bm7hr") pod "fdcde0c8-5958-4c81-8860-1be3a31bcb5c" (UID: "fdcde0c8-5958-4c81-8860-1be3a31bcb5c"). InnerVolumeSpecName "kube-api-access-bm7hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.904672 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx" (OuterVolumeSpecName: "kube-api-access-nk9hx") pod "1232d6b2-5c5a-478a-936e-cf5ab61abd80" (UID: "1232d6b2-5c5a-478a-936e-cf5ab61abd80"). InnerVolumeSpecName "kube-api-access-nk9hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.905595 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl" (OuterVolumeSpecName: "kube-api-access-gzpfl") pod "99373758-9712-495b-bca3-40192fac3419" (UID: "99373758-9712-495b-bca3-40192fac3419"). InnerVolumeSpecName "kube-api-access-gzpfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:05 crc kubenswrapper[4908]: I0131 07:41:05.905653 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt" (OuterVolumeSpecName: "kube-api-access-fdsqt") pod "1f29d7a2-caad-495f-a5c0-b58ddb2f2790" (UID: "1f29d7a2-caad-495f-a5c0-b58ddb2f2790"). InnerVolumeSpecName "kube-api-access-fdsqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.001946 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg475\" (UniqueName: \"kubernetes.io/projected/22e90746-ae14-4f0b-af18-258e35239b0d-kube-api-access-jg475\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002038 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22e90746-ae14-4f0b-af18-258e35239b0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002049 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdsqt\" (UniqueName: \"kubernetes.io/projected/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-kube-api-access-fdsqt\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002059 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk9hx\" (UniqueName: \"kubernetes.io/projected/1232d6b2-5c5a-478a-936e-cf5ab61abd80-kube-api-access-nk9hx\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002140 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1232d6b2-5c5a-478a-936e-cf5ab61abd80-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002175 4908 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f29d7a2-caad-495f-a5c0-b58ddb2f2790-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002187 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002196 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkgj2\" (UniqueName: \"kubernetes.io/projected/9fbfeac6-331c-4893-be5c-40183532e503-kube-api-access-qkgj2\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002208 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzpfl\" (UniqueName: \"kubernetes.io/projected/99373758-9712-495b-bca3-40192fac3419-kube-api-access-gzpfl\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002218 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbfeac6-331c-4893-be5c-40183532e503-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002227 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99373758-9712-495b-bca3-40192fac3419-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.002237 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm7hr\" (UniqueName: \"kubernetes.io/projected/fdcde0c8-5958-4c81-8860-1be3a31bcb5c-kube-api-access-bm7hr\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.284181 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-kg2tc" event={"ID":"9fbfeac6-331c-4893-be5c-40183532e503","Type":"ContainerDied","Data":"c8df52cf79b2ff0dcb2ecb1aa54f50ea2bc1b66a4f81ec635c18195065239601"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.284246 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8df52cf79b2ff0dcb2ecb1aa54f50ea2bc1b66a4f81ec635c18195065239601" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.284266 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kg2tc" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.286365 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0c73-account-create-update-ckgrp" event={"ID":"1232d6b2-5c5a-478a-936e-cf5ab61abd80","Type":"ContainerDied","Data":"d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.286397 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d147deedc3ca8c9f1d2f72c464db5571e40818cd019ee8a63f1687067516f6f5" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.286458 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0c73-account-create-update-ckgrp" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.294678 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czx5b" event={"ID":"1f29d7a2-caad-495f-a5c0-b58ddb2f2790","Type":"ContainerDied","Data":"e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.294713 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e881922394adcff69df6a411d4acd65a6bbd439e47e881ec88c28d2773808f5c" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.294766 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-czx5b" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.301186 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7a8d-account-create-update-gh5qr" event={"ID":"fdcde0c8-5958-4c81-8860-1be3a31bcb5c","Type":"ContainerDied","Data":"03234c5ae42764f70f7d4d2bfe866dd8faa4f86a38dd2da4faa419f9c18a598c"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.301222 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03234c5ae42764f70f7d4d2bfe866dd8faa4f86a38dd2da4faa419f9c18a598c" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.301282 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7a8d-account-create-update-gh5qr" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.304850 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3f7-account-create-update-xwb54" event={"ID":"22e90746-ae14-4f0b-af18-258e35239b0d","Type":"ContainerDied","Data":"b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.304880 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3f7-account-create-update-xwb54" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.304885 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99aba23690ec634e21d9e8308bbdeb232ae3db759862abe15b5e5d85b94ef4f" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.306780 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-49kmd" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.306779 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-49kmd" event={"ID":"99373758-9712-495b-bca3-40192fac3419","Type":"ContainerDied","Data":"a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.306888 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d16bb91e8c47c22379c06a9b27fd0d767e9e45ae6b3651fb424f067ed622cc" Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.308378 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xfqn" event={"ID":"0c53a541-b434-4bf6-8b68-6b56f14fee52","Type":"ContainerStarted","Data":"055bed958baff07e16aee7b003c020f28515e104d6b81ee059a44dc3b50fd0dd"} Jan 31 07:41:06 crc kubenswrapper[4908]: I0131 07:41:06.328742 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4xfqn" podStartSLOduration=9.834212542 podStartE2EDuration="16.328726904s" podCreationTimestamp="2026-01-31 07:40:50 +0000 UTC" firstStartedPulling="2026-01-31 07:40:59.077760311 +0000 UTC m=+1165.693704965" lastFinishedPulling="2026-01-31 07:41:05.572274673 +0000 UTC m=+1172.188219327" observedRunningTime="2026-01-31 07:41:06.323035363 +0000 UTC m=+1172.938980017" watchObservedRunningTime="2026-01-31 07:41:06.328726904 +0000 UTC m=+1172.944671558" Jan 31 07:41:10 crc kubenswrapper[4908]: I0131 07:41:10.431402 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:41:10 crc kubenswrapper[4908]: I0131 07:41:10.431701 4908 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:41:14 crc kubenswrapper[4908]: I0131 07:41:14.369892 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c53a541-b434-4bf6-8b68-6b56f14fee52" containerID="055bed958baff07e16aee7b003c020f28515e104d6b81ee059a44dc3b50fd0dd" exitCode=0 Jan 31 07:41:14 crc kubenswrapper[4908]: I0131 07:41:14.369997 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xfqn" event={"ID":"0c53a541-b434-4bf6-8b68-6b56f14fee52","Type":"ContainerDied","Data":"055bed958baff07e16aee7b003c020f28515e104d6b81ee059a44dc3b50fd0dd"} Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.380096 4908 generic.go:334] "Generic (PLEG): container finished" podID="35b314ce-a7db-42b5-b571-2f23c1065d37" containerID="3ecd7306e601d01a7c2f2be8420c80d74111a15ce1a330c64942316c09b14796" exitCode=0 Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.380532 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt52x" event={"ID":"35b314ce-a7db-42b5-b571-2f23c1065d37","Type":"ContainerDied","Data":"3ecd7306e601d01a7c2f2be8420c80d74111a15ce1a330c64942316c09b14796"} Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.713760 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4xfqn" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.864504 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq94j\" (UniqueName: \"kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j\") pod \"0c53a541-b434-4bf6-8b68-6b56f14fee52\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.864644 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle\") pod \"0c53a541-b434-4bf6-8b68-6b56f14fee52\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.864752 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data\") pod \"0c53a541-b434-4bf6-8b68-6b56f14fee52\" (UID: \"0c53a541-b434-4bf6-8b68-6b56f14fee52\") " Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.874186 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j" (OuterVolumeSpecName: "kube-api-access-pq94j") pod "0c53a541-b434-4bf6-8b68-6b56f14fee52" (UID: "0c53a541-b434-4bf6-8b68-6b56f14fee52"). InnerVolumeSpecName "kube-api-access-pq94j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.889739 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c53a541-b434-4bf6-8b68-6b56f14fee52" (UID: "0c53a541-b434-4bf6-8b68-6b56f14fee52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.904751 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data" (OuterVolumeSpecName: "config-data") pod "0c53a541-b434-4bf6-8b68-6b56f14fee52" (UID: "0c53a541-b434-4bf6-8b68-6b56f14fee52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.966605 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq94j\" (UniqueName: \"kubernetes.io/projected/0c53a541-b434-4bf6-8b68-6b56f14fee52-kube-api-access-pq94j\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.966643 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:15 crc kubenswrapper[4908]: I0131 07:41:15.966658 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c53a541-b434-4bf6-8b68-6b56f14fee52-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.388423 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4xfqn" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.388420 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xfqn" event={"ID":"0c53a541-b434-4bf6-8b68-6b56f14fee52","Type":"ContainerDied","Data":"25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988"} Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.388549 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dca571f9afc53a052c9e0759141d8ce587c2c3f406159f541657004f731988" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638150 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638765 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f29d7a2-caad-495f-a5c0-b58ddb2f2790" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638778 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f29d7a2-caad-495f-a5c0-b58ddb2f2790" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638790 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbfeac6-331c-4893-be5c-40183532e503" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638797 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbfeac6-331c-4893-be5c-40183532e503" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638807 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcde0c8-5958-4c81-8860-1be3a31bcb5c" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638814 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcde0c8-5958-4c81-8860-1be3a31bcb5c" containerName="mariadb-account-create-update" 
Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638826 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99373758-9712-495b-bca3-40192fac3419" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638832 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="99373758-9712-495b-bca3-40192fac3419" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638839 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48106831-736b-4a62-88a8-7dd262a2809e" containerName="ovn-config" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638847 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="48106831-736b-4a62-88a8-7dd262a2809e" containerName="ovn-config" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638861 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e90746-ae14-4f0b-af18-258e35239b0d" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638867 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e90746-ae14-4f0b-af18-258e35239b0d" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638878 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53a541-b434-4bf6-8b68-6b56f14fee52" containerName="keystone-db-sync" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638883 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53a541-b434-4bf6-8b68-6b56f14fee52" containerName="keystone-db-sync" Jan 31 07:41:16 crc kubenswrapper[4908]: E0131 07:41:16.638893 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1232d6b2-5c5a-478a-936e-cf5ab61abd80" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.638899 4908 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1232d6b2-5c5a-478a-936e-cf5ab61abd80" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639054 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e90746-ae14-4f0b-af18-258e35239b0d" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639064 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f29d7a2-caad-495f-a5c0-b58ddb2f2790" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639073 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="48106831-736b-4a62-88a8-7dd262a2809e" containerName="ovn-config" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639080 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcde0c8-5958-4c81-8860-1be3a31bcb5c" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639090 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbfeac6-331c-4893-be5c-40183532e503" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639102 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="99373758-9712-495b-bca3-40192fac3419" containerName="mariadb-database-create" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639109 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1232d6b2-5c5a-478a-936e-cf5ab61abd80" containerName="mariadb-account-create-update" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.639117 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c53a541-b434-4bf6-8b68-6b56f14fee52" containerName="keystone-db-sync" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.641237 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.682646 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.700771 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xfz2l"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.701720 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.704684 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.704879 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.705035 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.705173 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68r4f" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.705318 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.742155 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xfz2l"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.781959 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwqv\" (UniqueName: \"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc 
kubenswrapper[4908]: I0131 07:41:16.782078 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.782151 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.782213 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.782259 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.798068 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.808379 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.814595 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.814673 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.814603 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vvpfc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.814859 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.833564 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.884685 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8bx9v"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885233 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885277 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885305 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885334 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885355 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885375 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885413 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crwqv\" (UniqueName: \"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885437 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885455 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885470 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885488 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885507 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885524 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885540 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649tg\" (UniqueName: \"kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885558 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885572 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ht2\" (UniqueName: \"kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.885738 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.886468 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.887032 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.887060 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.887831 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.894122 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.910920 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.911198 4908 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cinder-cinder-dockercfg-cpqgs" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.924923 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8bx9v"] Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.948067 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crwqv\" (UniqueName: \"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv\") pod \"dnsmasq-dns-66fbd85b65-r5qzc\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.984475 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:16 crc kubenswrapper[4908]: I0131 07:41:16.986469 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.001747 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.017243 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.018266 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.018574 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.018833 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.019290 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.019433 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.019530 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649tg\" (UniqueName: 
\"kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.019672 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.019785 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ht2\" (UniqueName: \"kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:16.999466 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:16.995513 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lt52x" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.020534 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.015759 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.021387 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.032859 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.038475 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.050907 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.055139 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lbm78"] Jan 31 07:41:17 crc kubenswrapper[4908]: E0131 07:41:17.055805 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b314ce-a7db-42b5-b571-2f23c1065d37" containerName="glance-db-sync" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.055914 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b314ce-a7db-42b5-b571-2f23c1065d37" containerName="glance-db-sync" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.056230 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b314ce-a7db-42b5-b571-2f23c1065d37" containerName="glance-db-sync" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.066806 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.072023 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.074267 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.074597 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.098246 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d9cpc" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.099122 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.106519 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649tg\" (UniqueName: \"kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg\") pod \"keystone-bootstrap-xfz2l\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.134841 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6r2\" (UniqueName: \"kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2\") pod \"35b314ce-a7db-42b5-b571-2f23c1065d37\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.134885 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data\") pod \"35b314ce-a7db-42b5-b571-2f23c1065d37\" (UID: 
\"35b314ce-a7db-42b5-b571-2f23c1065d37\") " Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.134907 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data\") pod \"35b314ce-a7db-42b5-b571-2f23c1065d37\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.134929 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle\") pod \"35b314ce-a7db-42b5-b571-2f23c1065d37\" (UID: \"35b314ce-a7db-42b5-b571-2f23c1065d37\") " Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.135377 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.135404 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.135429 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136095 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136138 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136158 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp484\" (UniqueName: \"kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136182 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxvn\" (UniqueName: \"kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136209 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.136238 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.140907 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2" (OuterVolumeSpecName: "kube-api-access-cg6r2") pod "35b314ce-a7db-42b5-b571-2f23c1065d37" (UID: "35b314ce-a7db-42b5-b571-2f23c1065d37"). InnerVolumeSpecName "kube-api-access-cg6r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.144621 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbm78"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.146707 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "35b314ce-a7db-42b5-b571-2f23c1065d37" (UID: "35b314ce-a7db-42b5-b571-2f23c1065d37"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.164567 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6ht2\" (UniqueName: \"kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2\") pod \"horizon-5d97fb449c-bcz8d\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.180885 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.182827 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.198311 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-krd5r"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.199422 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.201925 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9gcrz" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.203204 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.203303 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.207379 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-krd5r"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.214636 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35b314ce-a7db-42b5-b571-2f23c1065d37" (UID: "35b314ce-a7db-42b5-b571-2f23c1065d37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.218785 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.230236 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data" (OuterVolumeSpecName: "config-data") pod "35b314ce-a7db-42b5-b571-2f23c1065d37" (UID: "35b314ce-a7db-42b5-b571-2f23c1065d37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238194 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238247 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238285 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238311 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238442 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 
07:41:17.238498 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp484\" (UniqueName: \"kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238555 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxvn\" (UniqueName: \"kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238596 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238626 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238649 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238688 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238736 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238825 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xrd\" (UniqueName: \"kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238893 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238935 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.238967 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.243855 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.244005 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwzh\" (UniqueName: \"kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.244190 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.244877 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6r2\" (UniqueName: \"kubernetes.io/projected/35b314ce-a7db-42b5-b571-2f23c1065d37-kube-api-access-cg6r2\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.244903 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.244966 4908 
reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.245204 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b314ce-a7db-42b5-b571-2f23c1065d37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.242881 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.242925 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.248155 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.250007 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.251201 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.255495 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.255518 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.256501 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.262517 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bmhth"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.263479 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.264140 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp484\" (UniqueName: \"kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484\") pod \"neutron-db-sync-lbm78\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.265933 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sfjjd" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.266330 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.296789 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.304651 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bmhth"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.312469 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.325618 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.327300 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.347878 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352290 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhkv\" (UniqueName: \"kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352340 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352363 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352650 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352699 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352728 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352803 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352893 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xrd\" (UniqueName: \"kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.352954 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts\") pod 
\"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353045 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353123 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353177 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jb9\" (UniqueName: \"kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353208 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353260 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lwzh\" (UniqueName: \"kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " 
pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353331 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353373 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353416 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353438 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.353506 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.357359 4908 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.397862 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt52x" event={"ID":"35b314ce-a7db-42b5-b571-2f23c1065d37","Type":"ContainerDied","Data":"6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94"} Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.397900 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf5d36d60dd52e557cefd1a1fd4e5ce3825e18d39c77d3d63d83e24baa47c94" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.397962 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lt52x" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.448342 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.450440 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455122 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455190 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config\") pod 
\"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455217 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhkv\" (UniqueName: \"kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455260 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455446 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455549 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455601 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9zg\" (UniqueName: \"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: 
\"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455621 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455708 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455741 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455791 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455820 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jb9\" (UniqueName: \"kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.455834 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxvn\" (UniqueName: \"kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456065 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456102 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lbm78" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456225 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data\") pod \"cinder-db-sync-8bx9v\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456852 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456895 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.456911 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.459445 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.459671 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.459937 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.460050 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.460086 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.463832 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.465857 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc 
kubenswrapper[4908]: I0131 07:41:17.471290 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhkv\" (UniqueName: \"kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv\") pod \"ceilometer-0\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.474738 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jb9\" (UniqueName: \"kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9\") pod \"barbican-db-sync-bmhth\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.478307 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.480064 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.480117 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.482431 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.486610 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xrd\" (UniqueName: \"kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.487257 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data\") pod \"horizon-6db75f7db5-rvdn8\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.489394 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.492522 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lwzh\" (UniqueName: \"kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.492738 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle\") pod \"placement-db-sync-krd5r\" (UID: 
\"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.493422 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data\") pod \"placement-db-sync-krd5r\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.519384 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.528513 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krd5r" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.558971 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9zg\" (UniqueName: \"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.559120 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.559233 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc 
kubenswrapper[4908]: I0131 07:41:17.559263 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.559294 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.560798 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.561239 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.561759 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.563032 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.576247 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9zg\" (UniqueName: \"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg\") pod \"dnsmasq-dns-6bf59f66bf-klrpw\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.583336 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.597315 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bmhth" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.700685 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.759019 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:17 crc kubenswrapper[4908]: I0131 07:41:17.781356 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:28 crc kubenswrapper[4908]: W0131 07:41:17.973418 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab501e7_da69_4f12_a78c_fd28cad07038.slice/crio-9bd31019863f688df9777044f94f684f8cdb941e7c82b1dbe438a74d1489d9fd WatchSource:0}: Error finding container 9bd31019863f688df9777044f94f684f8cdb941e7c82b1dbe438a74d1489d9fd: Status 404 returned error can't find the container with id 9bd31019863f688df9777044f94f684f8cdb941e7c82b1dbe438a74d1489d9fd Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.123878 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.134568 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.234372 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xfz2l"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.287285 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.299827 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.338525 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.394472 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69rw\" (UniqueName: \"kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.394903 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.395102 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.395150 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.395265 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.449522 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" event={"ID":"4ab501e7-da69-4f12-a78c-fd28cad07038","Type":"ContainerStarted","Data":"9bd31019863f688df9777044f94f684f8cdb941e7c82b1dbe438a74d1489d9fd"} Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.452475 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xfz2l" event={"ID":"610073fa-52a8-4f40-8598-372d9f418a91","Type":"ContainerStarted","Data":"eb651da69afe327a435df0397e97a4ce49e37cb7905fabe72893e5a5a57386c3"} Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.457590 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d97fb449c-bcz8d" event={"ID":"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36","Type":"ContainerStarted","Data":"560657e196bfa20a34463ac1d5d4658a9bf816f81d554a38948f9bb2cf4c3d61"} Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.496876 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.496956 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" 
Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.497041 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r69rw\" (UniqueName: \"kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.497066 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.497141 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.498239 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.498348 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.498817 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.499338 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.524126 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r69rw\" (UniqueName: \"kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw\") pod \"dnsmasq-dns-5b6dbdb6f5-5zhxx\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:18.673369 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.071219 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.174348 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.184164 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.213910 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.312177 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.312283 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.312353 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwxv\" (UniqueName: \"kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.312418 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.312453 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.420310 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnwxv\" (UniqueName: \"kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.420395 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.420441 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.420518 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.420547 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs\") pod 
\"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.421275 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.422348 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.427107 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.441115 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.460507 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnwxv\" (UniqueName: \"kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv\") pod \"horizon-5c67dbb4cc-s5dk4\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc 
kubenswrapper[4908]: I0131 07:41:19.505459 4908 generic.go:334] "Generic (PLEG): container finished" podID="4ab501e7-da69-4f12-a78c-fd28cad07038" containerID="35feb5ca8466d5e0dd39e2025fbeded26c332ce0cbcfb9600c057852cb346080" exitCode=0 Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.505539 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" event={"ID":"4ab501e7-da69-4f12-a78c-fd28cad07038","Type":"ContainerDied","Data":"35feb5ca8466d5e0dd39e2025fbeded26c332ce0cbcfb9600c057852cb346080"} Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.510494 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.511656 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xfz2l" event={"ID":"610073fa-52a8-4f40-8598-372d9f418a91","Type":"ContainerStarted","Data":"c77171f2efe0b99af1fef8b07aa42561940f514c261810a02748485c80129739"} Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:19.563494 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xfz2l" podStartSLOduration=3.563475146 podStartE2EDuration="3.563475146s" podCreationTimestamp="2026-01-31 07:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:19.563343343 +0000 UTC m=+1186.179287997" watchObservedRunningTime="2026-01-31 07:41:19.563475146 +0000 UTC m=+1186.179419800" Jan 31 07:41:28 crc kubenswrapper[4908]: I0131 07:41:28.569094 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.024308 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.032597 4908 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-krd5r"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.050586 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbm78"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.063055 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.072439 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.078799 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bmhth"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.088337 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8bx9v"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.101151 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:41:29 crc kubenswrapper[4908]: I0131 07:41:29.111014 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.238342 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.278038 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5575f74d-z4ns7"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.279440 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.281829 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.287761 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5575f74d-z4ns7"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.372803 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.404996 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c4bcc4864-knpgw"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.406692 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.414648 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.414691 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.414849 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data\") pod \"horizon-5575f74d-z4ns7\" (UID: 
\"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.414971 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.415077 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.415134 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.415190 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnbb\" (UniqueName: \"kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.419134 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4bcc4864-knpgw"] Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516561 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516615 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516671 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-scripts\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516706 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516737 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-logs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516781 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-secret-key\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516820 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516845 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-combined-ca-bundle\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516888 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516914 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-tls-certs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516950 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516961 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.516995 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-config-data\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.517582 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssnbb\" (UniqueName: \"kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.517956 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.518008 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjsx\" (UniqueName: \"kubernetes.io/projected/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-kube-api-access-6mjsx\") pod 
\"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.518141 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.533579 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.534564 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.540279 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnbb\" (UniqueName: \"kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.543625 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle\") pod \"horizon-5575f74d-z4ns7\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc 
kubenswrapper[4908]: I0131 07:41:30.608565 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.621519 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-config-data\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.621676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjsx\" (UniqueName: \"kubernetes.io/projected/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-kube-api-access-6mjsx\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.621987 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-scripts\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622099 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-logs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622212 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-secret-key\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") 
" pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622323 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-combined-ca-bundle\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622419 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-tls-certs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622438 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-logs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.622523 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-scripts\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.623451 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-config-data\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.626229 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-secret-key\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.626653 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-horizon-tls-certs\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.626959 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-combined-ca-bundle\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.647201 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjsx\" (UniqueName: \"kubernetes.io/projected/ee45fe79-e3e5-494d-a355-4f8cd5401c8f-kube-api-access-6mjsx\") pod \"horizon-7c4bcc4864-knpgw\" (UID: \"ee45fe79-e3e5-494d-a355-4f8cd5401c8f\") " pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:30 crc kubenswrapper[4908]: I0131 07:41:30.721337 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4bcc4864-knpgw" Jan 31 07:41:31 crc kubenswrapper[4908]: W0131 07:41:31.236476 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd523cbb4_de42_4a3f_9636_ec5b89ad51b7.slice/crio-c0153662ce181a89091f6f642f6558a9789fef19a793e7c0631e048ea94699bf WatchSource:0}: Error finding container c0153662ce181a89091f6f642f6558a9789fef19a793e7c0631e048ea94699bf: Status 404 returned error can't find the container with id c0153662ce181a89091f6f642f6558a9789fef19a793e7c0631e048ea94699bf Jan 31 07:41:31 crc kubenswrapper[4908]: E0131 07:41:31.239173 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 07:41:31 crc kubenswrapper[4908]: E0131 07:41:31.239403 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h55bh55ch566h678h64fh54ch599h694h695h674h67h599h5f5h5fdhfchd7h57h9bh567h64hddh586hf4h576h68bh549h6ch8chfdh595hcdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6ht2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d97fb449c-bcz8d_openstack(c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:41:31 crc kubenswrapper[4908]: E0131 
07:41:31.241410 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5d97fb449c-bcz8d" podUID="c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" Jan 31 07:41:31 crc kubenswrapper[4908]: W0131 07:41:31.262239 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f90a57b_f1f8_4fc5_ac97_bdd418912544.slice/crio-eff0a2029a325efd2710c0a7e74c2ce9dbbc6c580c7de075f2092f2c05d19278 WatchSource:0}: Error finding container eff0a2029a325efd2710c0a7e74c2ce9dbbc6c580c7de075f2092f2c05d19278: Status 404 returned error can't find the container with id eff0a2029a325efd2710c0a7e74c2ce9dbbc6c580c7de075f2092f2c05d19278 Jan 31 07:41:31 crc kubenswrapper[4908]: W0131 07:41:31.267597 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c25925c_5085_4002_970e_65facdd68148.slice/crio-ef2ae5d05d76cd9c53912fef3252291fd9b528244a1fc792b733993a3e071763 WatchSource:0}: Error finding container ef2ae5d05d76cd9c53912fef3252291fd9b528244a1fc792b733993a3e071763: Status 404 returned error can't find the container with id ef2ae5d05d76cd9c53912fef3252291fd9b528244a1fc792b733993a3e071763 Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.291898 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.434353 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crwqv\" (UniqueName: \"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv\") pod \"4ab501e7-da69-4f12-a78c-fd28cad07038\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.434641 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb\") pod \"4ab501e7-da69-4f12-a78c-fd28cad07038\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.434705 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc\") pod \"4ab501e7-da69-4f12-a78c-fd28cad07038\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.434813 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config\") pod \"4ab501e7-da69-4f12-a78c-fd28cad07038\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.434842 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb\") pod \"4ab501e7-da69-4f12-a78c-fd28cad07038\" (UID: \"4ab501e7-da69-4f12-a78c-fd28cad07038\") " Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.447511 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv" (OuterVolumeSpecName: "kube-api-access-crwqv") pod "4ab501e7-da69-4f12-a78c-fd28cad07038" (UID: "4ab501e7-da69-4f12-a78c-fd28cad07038"). InnerVolumeSpecName "kube-api-access-crwqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.482920 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ab501e7-da69-4f12-a78c-fd28cad07038" (UID: "4ab501e7-da69-4f12-a78c-fd28cad07038"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.515139 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config" (OuterVolumeSpecName: "config") pod "4ab501e7-da69-4f12-a78c-fd28cad07038" (UID: "4ab501e7-da69-4f12-a78c-fd28cad07038"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.526606 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ab501e7-da69-4f12-a78c-fd28cad07038" (UID: "4ab501e7-da69-4f12-a78c-fd28cad07038"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.537724 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.537756 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.537773 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crwqv\" (UniqueName: \"kubernetes.io/projected/4ab501e7-da69-4f12-a78c-fd28cad07038-kube-api-access-crwqv\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.537821 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.580303 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ab501e7-da69-4f12-a78c-fd28cad07038" (UID: "4ab501e7-da69-4f12-a78c-fd28cad07038"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.639348 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab501e7-da69-4f12-a78c-fd28cad07038-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.653811 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerStarted","Data":"c0153662ce181a89091f6f642f6558a9789fef19a793e7c0631e048ea94699bf"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.656770 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerStarted","Data":"ef2ae5d05d76cd9c53912fef3252291fd9b528244a1fc792b733993a3e071763"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.658429 4908 generic.go:334] "Generic (PLEG): container finished" podID="3b3e29e1-fd95-49af-9f16-d5ed825c46b4" containerID="7ea91fd779a9b513d971ee0cdc8c16e424010845760bf348146bc4aa7f072b2d" exitCode=0 Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.658472 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" event={"ID":"3b3e29e1-fd95-49af-9f16-d5ed825c46b4","Type":"ContainerDied","Data":"7ea91fd779a9b513d971ee0cdc8c16e424010845760bf348146bc4aa7f072b2d"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.658489 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" event={"ID":"3b3e29e1-fd95-49af-9f16-d5ed825c46b4","Type":"ContainerStarted","Data":"50daa71c8464127a4a75b2fafae36ae7bdbe2a2f276e71cf201ec0b7b84fb007"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.660805 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krd5r" 
event={"ID":"3b59881d-759b-492a-b475-c27f092660c6","Type":"ContainerStarted","Data":"8561bd6d73e2c2f3a973d0eb03e083a54fc635d325c546c39553be6b69c9c487"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.662097 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f90a57b-f1f8-4fc5-ac97-bdd418912544","Type":"ContainerStarted","Data":"eff0a2029a325efd2710c0a7e74c2ce9dbbc6c580c7de075f2092f2c05d19278"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.663330 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbm78" event={"ID":"d1b5c255-9609-4fc5-a3af-10d0faf40366","Type":"ContainerStarted","Data":"51d33265242be6c5d8c594fbe23fcc24be49780a685e54cda6de66912ed24619"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.663352 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbm78" event={"ID":"d1b5c255-9609-4fc5-a3af-10d0faf40366","Type":"ContainerStarted","Data":"ff1082203cf4e751061e08d00f5194a4e6544972e1736467ac4d09e79e3aa977"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.667648 4908 generic.go:334] "Generic (PLEG): container finished" podID="610073fa-52a8-4f40-8598-372d9f418a91" containerID="c77171f2efe0b99af1fef8b07aa42561940f514c261810a02748485c80129739" exitCode=0 Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.667702 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xfz2l" event={"ID":"610073fa-52a8-4f40-8598-372d9f418a91","Type":"ContainerDied","Data":"c77171f2efe0b99af1fef8b07aa42561940f514c261810a02748485c80129739"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.672238 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bmhth" event={"ID":"a9f3cf33-1d4c-4fae-ac0e-54d917d43325","Type":"ContainerStarted","Data":"c1d2d8c20cd17fa159ab12e9301c65a4893e82407e653fdd123d28af9dfe83e0"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 
07:41:31.678506 4908 generic.go:334] "Generic (PLEG): container finished" podID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerID="661125ce2d6af699b6880ba16e7232844af4958271706b5011c7d6e840eab426" exitCode=0 Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.679479 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" event={"ID":"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c","Type":"ContainerDied","Data":"661125ce2d6af699b6880ba16e7232844af4958271706b5011c7d6e840eab426"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.679506 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" event={"ID":"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c","Type":"ContainerStarted","Data":"c6f49810702dfcd0bf07d72c1b7f17a9e424035905821d2b484694548af4766a"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.686856 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.687442 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-r5qzc" event={"ID":"4ab501e7-da69-4f12-a78c-fd28cad07038","Type":"ContainerDied","Data":"9bd31019863f688df9777044f94f684f8cdb941e7c82b1dbe438a74d1489d9fd"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.687503 4908 scope.go:117] "RemoveContainer" containerID="35feb5ca8466d5e0dd39e2025fbeded26c332ce0cbcfb9600c057852cb346080" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.691285 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bx9v" event={"ID":"ee8dbc71-e43b-49a6-9d68-78b987f39b89","Type":"ContainerStarted","Data":"a6e8ef87ca1a02976b2e75ea54b18d453b8994c3d5cda64dd0f8011116a3ae95"} Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.776061 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lbm78" 
podStartSLOduration=15.776039508 podStartE2EDuration="15.776039508s" podCreationTimestamp="2026-01-31 07:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:31.770468112 +0000 UTC m=+1198.386412766" watchObservedRunningTime="2026-01-31 07:41:31.776039508 +0000 UTC m=+1198.391984172" Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.809265 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5575f74d-z4ns7"] Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.837826 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4bcc4864-knpgw"] Jan 31 07:41:31 crc kubenswrapper[4908]: W0131 07:41:31.915466 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee45fe79_e3e5_494d_a355_4f8cd5401c8f.slice/crio-67d184fb3bdd67387971641b19a061b9d5ca06321e5895c6aaa7ec2d3401a278 WatchSource:0}: Error finding container 67d184fb3bdd67387971641b19a061b9d5ca06321e5895c6aaa7ec2d3401a278: Status 404 returned error can't find the container with id 67d184fb3bdd67387971641b19a061b9d5ca06321e5895c6aaa7ec2d3401a278 Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.983362 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:31 crc kubenswrapper[4908]: I0131 07:41:31.987871 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-r5qzc"] Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.275796 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.306709 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.367692 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb\") pod \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.367813 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp9zg\" (UniqueName: \"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg\") pod \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.367915 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc\") pod \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.368684 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb\") pod \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.368773 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config\") pod \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\" (UID: \"3b3e29e1-fd95-49af-9f16-d5ed825c46b4\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.373654 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg" (OuterVolumeSpecName: "kube-api-access-xp9zg") pod "3b3e29e1-fd95-49af-9f16-d5ed825c46b4" (UID: "3b3e29e1-fd95-49af-9f16-d5ed825c46b4"). InnerVolumeSpecName "kube-api-access-xp9zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.470443 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ht2\" (UniqueName: \"kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2\") pod \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.470768 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key\") pod \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.470806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs\") pod \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.470863 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data\") pod \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471034 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts\") pod 
\"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\" (UID: \"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36\") " Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471230 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs" (OuterVolumeSpecName: "logs") pod "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" (UID: "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471585 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp9zg\" (UniqueName: \"kubernetes.io/projected/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-kube-api-access-xp9zg\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471607 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471677 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts" (OuterVolumeSpecName: "scripts") pod "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" (UID: "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.471728 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data" (OuterVolumeSpecName: "config-data") pod "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" (UID: "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.490251 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" (UID: "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.490282 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2" (OuterVolumeSpecName: "kube-api-access-q6ht2") pod "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" (UID: "c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36"). InnerVolumeSpecName "kube-api-access-q6ht2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.492770 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config" (OuterVolumeSpecName: "config") pod "3b3e29e1-fd95-49af-9f16-d5ed825c46b4" (UID: "3b3e29e1-fd95-49af-9f16-d5ed825c46b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.501858 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3b3e29e1-fd95-49af-9f16-d5ed825c46b4" (UID: "3b3e29e1-fd95-49af-9f16-d5ed825c46b4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.513486 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3b3e29e1-fd95-49af-9f16-d5ed825c46b4" (UID: "3b3e29e1-fd95-49af-9f16-d5ed825c46b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.515692 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b3e29e1-fd95-49af-9f16-d5ed825c46b4" (UID: "3b3e29e1-fd95-49af-9f16-d5ed825c46b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574231 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574263 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574275 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574288 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6ht2\" (UniqueName: \"kubernetes.io/projected/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-kube-api-access-q6ht2\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc 
kubenswrapper[4908]: I0131 07:41:32.574300 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574310 4908 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574322 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.574372 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b3e29e1-fd95-49af-9f16-d5ed825c46b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.727214 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4bcc4864-knpgw" event={"ID":"ee45fe79-e3e5-494d-a355-4f8cd5401c8f","Type":"ContainerStarted","Data":"4b2bb60a60fe54779e8639335b90be520c00c5e278804e53a1df352dcda675b9"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.727487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4bcc4864-knpgw" event={"ID":"ee45fe79-e3e5-494d-a355-4f8cd5401c8f","Type":"ContainerStarted","Data":"67d184fb3bdd67387971641b19a061b9d5ca06321e5895c6aaa7ec2d3401a278"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.753330 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d97fb449c-bcz8d" event={"ID":"c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36","Type":"ContainerDied","Data":"560657e196bfa20a34463ac1d5d4658a9bf816f81d554a38948f9bb2cf4c3d61"} Jan 31 07:41:32 crc 
kubenswrapper[4908]: I0131 07:41:32.753354 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d97fb449c-bcz8d" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.758171 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerStarted","Data":"58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.758217 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerStarted","Data":"37ae495fceadba4a8baff3789dd55dbbfb331d086181232b9633dab47f10741c"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.762041 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerStarted","Data":"d4f42d63e581d82b6957567ac64af5a484da60b48cde1cf6866b235fca7500ce"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.762103 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerStarted","Data":"086ab50e85bd3ec0467afd4182200a16399686c83a1be2a0280b12ee18936994"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.762118 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6db75f7db5-rvdn8" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon-log" containerID="cri-o://086ab50e85bd3ec0467afd4182200a16399686c83a1be2a0280b12ee18936994" gracePeriod=30 Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.762173 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6db75f7db5-rvdn8" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" 
containerName="horizon" containerID="cri-o://d4f42d63e581d82b6957567ac64af5a484da60b48cde1cf6866b235fca7500ce" gracePeriod=30 Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.765371 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerStarted","Data":"70266d995b55dfdd9c4a891ce83853aa414d5e7e73155ffd800d3b894014fa96"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.768912 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" event={"ID":"3b3e29e1-fd95-49af-9f16-d5ed825c46b4","Type":"ContainerDied","Data":"50daa71c8464127a4a75b2fafae36ae7bdbe2a2f276e71cf201ec0b7b84fb007"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.768956 4908 scope.go:117] "RemoveContainer" containerID="7ea91fd779a9b513d971ee0cdc8c16e424010845760bf348146bc4aa7f072b2d" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.768927 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-klrpw" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.786591 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6db75f7db5-rvdn8" podStartSLOduration=16.286420873 podStartE2EDuration="16.786568696s" podCreationTimestamp="2026-01-31 07:41:16 +0000 UTC" firstStartedPulling="2026-01-31 07:41:31.243264526 +0000 UTC m=+1197.859209180" lastFinishedPulling="2026-01-31 07:41:31.743412349 +0000 UTC m=+1198.359357003" observedRunningTime="2026-01-31 07:41:32.785036828 +0000 UTC m=+1199.400981502" watchObservedRunningTime="2026-01-31 07:41:32.786568696 +0000 UTC m=+1199.402513350" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.792849 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" event={"ID":"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c","Type":"ContainerStarted","Data":"d50769a9eb35ecb483198f59a855657aab9b4dd430b68690ac7822adc4c3eef9"} Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.794094 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.835202 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" podStartSLOduration=14.835141515 podStartE2EDuration="14.835141515s" podCreationTimestamp="2026-01-31 07:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:32.816576611 +0000 UTC m=+1199.432521285" watchObservedRunningTime="2026-01-31 07:41:32.835141515 +0000 UTC m=+1199.451086189" Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.915676 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.940220 4908 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-klrpw"] Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.952055 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:32 crc kubenswrapper[4908]: I0131 07:41:32.958043 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d97fb449c-bcz8d"] Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.236377 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411555 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411659 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411720 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411785 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 
31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411847 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.411890 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649tg\" (UniqueName: \"kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg\") pod \"610073fa-52a8-4f40-8598-372d9f418a91\" (UID: \"610073fa-52a8-4f40-8598-372d9f418a91\") " Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.419556 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg" (OuterVolumeSpecName: "kube-api-access-649tg") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "kube-api-access-649tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.419669 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.420908 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.421643 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts" (OuterVolumeSpecName: "scripts") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.444117 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.464074 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data" (OuterVolumeSpecName: "config-data") pod "610073fa-52a8-4f40-8598-372d9f418a91" (UID: "610073fa-52a8-4f40-8598-372d9f418a91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514330 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514368 4908 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514381 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514415 4908 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514431 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649tg\" (UniqueName: \"kubernetes.io/projected/610073fa-52a8-4f40-8598-372d9f418a91-kube-api-access-649tg\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.514443 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610073fa-52a8-4f40-8598-372d9f418a91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.786005 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xfz2l"] Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.798160 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xfz2l"] Jan 31 07:41:33 crc 
kubenswrapper[4908]: I0131 07:41:33.808052 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4bcc4864-knpgw" event={"ID":"ee45fe79-e3e5-494d-a355-4f8cd5401c8f","Type":"ContainerStarted","Data":"8f4ec9d0f722eef2047897b8c0581fcabf97cfaf8cdeec8b24d782190c2ff4d4"} Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.812094 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb651da69afe327a435df0397e97a4ce49e37cb7905fabe72893e5a5a57386c3" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.812144 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xfz2l" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.815189 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerStarted","Data":"3f02a725a5cf0f911eb45588a65e49dc4b79959c69f08f892b82fdd352b40798"} Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.820499 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerStarted","Data":"bdd197811d1f07feefe7ee9f4993d1ef69a24abc07b7c09f1cd6bd135540a6f0"} Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.820570 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c67dbb4cc-s5dk4" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon-log" containerID="cri-o://70266d995b55dfdd9c4a891ce83853aa414d5e7e73155ffd800d3b894014fa96" gracePeriod=30 Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.820592 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c67dbb4cc-s5dk4" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon" 
containerID="cri-o://bdd197811d1f07feefe7ee9f4993d1ef69a24abc07b7c09f1cd6bd135540a6f0" gracePeriod=30 Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.837876 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c4bcc4864-knpgw" podStartSLOduration=3.8378174 podStartE2EDuration="3.8378174s" podCreationTimestamp="2026-01-31 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:33.836911498 +0000 UTC m=+1200.452856152" watchObservedRunningTime="2026-01-31 07:41:33.8378174 +0000 UTC m=+1200.453762044" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889332 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ptzt6"] Jan 31 07:41:33 crc kubenswrapper[4908]: E0131 07:41:33.889658 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610073fa-52a8-4f40-8598-372d9f418a91" containerName="keystone-bootstrap" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889672 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="610073fa-52a8-4f40-8598-372d9f418a91" containerName="keystone-bootstrap" Jan 31 07:41:33 crc kubenswrapper[4908]: E0131 07:41:33.889688 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab501e7-da69-4f12-a78c-fd28cad07038" containerName="init" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889694 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab501e7-da69-4f12-a78c-fd28cad07038" containerName="init" Jan 31 07:41:33 crc kubenswrapper[4908]: E0131 07:41:33.889710 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3e29e1-fd95-49af-9f16-d5ed825c46b4" containerName="init" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889716 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3e29e1-fd95-49af-9f16-d5ed825c46b4" containerName="init" Jan 31 07:41:33 crc 
kubenswrapper[4908]: I0131 07:41:33.889917 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3e29e1-fd95-49af-9f16-d5ed825c46b4" containerName="init" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889926 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab501e7-da69-4f12-a78c-fd28cad07038" containerName="init" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.889949 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="610073fa-52a8-4f40-8598-372d9f418a91" containerName="keystone-bootstrap" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.890475 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.892944 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.893208 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68r4f" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.893398 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.893552 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.900570 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c67dbb4cc-s5dk4" podStartSLOduration=14.41793872 podStartE2EDuration="14.900545775s" podCreationTimestamp="2026-01-31 07:41:19 +0000 UTC" firstStartedPulling="2026-01-31 07:41:31.269642071 +0000 UTC m=+1197.885586725" lastFinishedPulling="2026-01-31 07:41:31.752249126 +0000 UTC m=+1198.368193780" observedRunningTime="2026-01-31 07:41:33.878829354 +0000 UTC m=+1200.494774008" watchObservedRunningTime="2026-01-31 
07:41:33.900545775 +0000 UTC m=+1200.516490429" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.901053 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.927094 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ptzt6"] Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.932466 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5575f74d-z4ns7" podStartSLOduration=3.932444146 podStartE2EDuration="3.932444146s" podCreationTimestamp="2026-01-31 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:33.904997615 +0000 UTC m=+1200.520942269" watchObservedRunningTime="2026-01-31 07:41:33.932444146 +0000 UTC m=+1200.548388800" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.951614 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3e29e1-fd95-49af-9f16-d5ed825c46b4" path="/var/lib/kubelet/pods/3b3e29e1-fd95-49af-9f16-d5ed825c46b4/volumes" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.952291 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab501e7-da69-4f12-a78c-fd28cad07038" path="/var/lib/kubelet/pods/4ab501e7-da69-4f12-a78c-fd28cad07038/volumes" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.952807 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610073fa-52a8-4f40-8598-372d9f418a91" path="/var/lib/kubelet/pods/610073fa-52a8-4f40-8598-372d9f418a91/volumes" Jan 31 07:41:33 crc kubenswrapper[4908]: I0131 07:41:33.954041 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36" path="/var/lib/kubelet/pods/c6a1a9a7-1b8c-4da9-9390-adf2e7d78f36/volumes" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026065 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026110 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026159 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqln\" (UniqueName: \"kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026372 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026400 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.026479 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.128553 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.128600 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.129383 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.129468 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.129494 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.129539 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqln\" (UniqueName: \"kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.135303 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.135400 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.135531 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.143791 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data\") pod \"keystone-bootstrap-ptzt6\" (UID: 
\"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.144231 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.147595 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqln\" (UniqueName: \"kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln\") pod \"keystone-bootstrap-ptzt6\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") " pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.216146 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ptzt6" Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.647933 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ptzt6"] Jan 31 07:41:34 crc kubenswrapper[4908]: W0131 07:41:34.652211 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570bea44_7c9b_4296_9188_5e5e590e4493.slice/crio-1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f WatchSource:0}: Error finding container 1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f: Status 404 returned error can't find the container with id 1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f Jan 31 07:41:34 crc kubenswrapper[4908]: I0131 07:41:34.830578 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ptzt6" 
event={"ID":"570bea44-7c9b-4296-9188-5e5e590e4493","Type":"ContainerStarted","Data":"1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f"} Jan 31 07:41:36 crc kubenswrapper[4908]: I0131 07:41:36.845309 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ptzt6" event={"ID":"570bea44-7c9b-4296-9188-5e5e590e4493","Type":"ContainerStarted","Data":"180e154f7fbd0350557b954c2fc464484a1cfd2215ffd4d20d43eb6a00b5aa42"} Jan 31 07:41:37 crc kubenswrapper[4908]: I0131 07:41:37.520883 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:41:38 crc kubenswrapper[4908]: I0131 07:41:38.675127 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:41:38 crc kubenswrapper[4908]: I0131 07:41:38.738579 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"] Jan 31 07:41:38 crc kubenswrapper[4908]: I0131 07:41:38.740385 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" containerID="cri-o://7f7c6afdbcc44ffd8afad0e835a7c5e55cd422049835838287de2b23120a5d27" gracePeriod=10 Jan 31 07:41:38 crc kubenswrapper[4908]: I0131 07:41:38.881310 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ptzt6" podStartSLOduration=5.881292075 podStartE2EDuration="5.881292075s" podCreationTimestamp="2026-01-31 07:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:38.878808004 +0000 UTC m=+1205.494752678" watchObservedRunningTime="2026-01-31 07:41:38.881292075 +0000 UTC m=+1205.497236719" Jan 31 07:41:39 crc kubenswrapper[4908]: I0131 07:41:39.511320 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:41:39 crc kubenswrapper[4908]: I0131 07:41:39.877819 4908 generic.go:334] "Generic (PLEG): container finished" podID="269111c9-ab7c-4600-a09a-d542812483ba" containerID="7f7c6afdbcc44ffd8afad0e835a7c5e55cd422049835838287de2b23120a5d27" exitCode=0 Jan 31 07:41:39 crc kubenswrapper[4908]: I0131 07:41:39.877885 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bdv78" event={"ID":"269111c9-ab7c-4600-a09a-d542812483ba","Type":"ContainerDied","Data":"7f7c6afdbcc44ffd8afad0e835a7c5e55cd422049835838287de2b23120a5d27"} Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.431425 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.431555 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.431664 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.433117 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted"
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.433252 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde" gracePeriod=600
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.609517 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5575f74d-z4ns7"
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.609587 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5575f74d-z4ns7"
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.722556 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c4bcc4864-knpgw"
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.722855 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c4bcc4864-knpgw"
Jan 31 07:41:40 crc kubenswrapper[4908]: I0131 07:41:40.736282 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 31 07:41:41 crc kubenswrapper[4908]: I0131 07:41:41.905337 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde"}
Jan 31 07:41:41 crc kubenswrapper[4908]: I0131 07:41:41.905374 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde" exitCode=0
Jan 31 07:41:41 crc kubenswrapper[4908]: I0131 07:41:41.905405 4908 scope.go:117] "RemoveContainer" containerID="58539bfd78268412e99de62573981b4cb5c5685bca0dc270f70e958484596b19"
Jan 31 07:41:45 crc kubenswrapper[4908]: I0131 07:41:45.736493 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 31 07:41:50 crc kubenswrapper[4908]: I0131 07:41:50.612710 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Jan 31 07:41:50 crc kubenswrapper[4908]: I0131 07:41:50.725491 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c4bcc4864-knpgw" podUID="ee45fe79-e3e5-494d-a355-4f8cd5401c8f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused"
Jan 31 07:41:50 crc kubenswrapper[4908]: I0131 07:41:50.737065 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 31 07:41:50 crc kubenswrapper[4908]: I0131 07:41:50.737357 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-bdv78"
Jan 31 07:41:55 crc kubenswrapper[4908]: I0131 07:41:55.737091 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 31 07:41:58 crc kubenswrapper[4908]: E0131 07:41:58.520391 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Jan 31 07:41:58 crc kubenswrapper[4908]: E0131 07:41:58.520804 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch55dhcdh54bh674h66h56h576hb8h74hbh55bhb8h567h666h57h664h8dhdch55h564h5dbh644h99hd6h6dh5ch598h585hdh8bhfbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lhkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9f90a57b-f1f8-4fc5-ac97-bdd418912544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:42:00 crc kubenswrapper[4908]: I0131 07:42:00.736384 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 31 07:42:01 crc kubenswrapper[4908]: I0131 07:42:01.141362 4908 generic.go:334] "Generic (PLEG): container finished" podID="570bea44-7c9b-4296-9188-5e5e590e4493" containerID="180e154f7fbd0350557b954c2fc464484a1cfd2215ffd4d20d43eb6a00b5aa42" exitCode=0
Jan 31 07:42:01 crc kubenswrapper[4908]: I0131 07:42:01.141554 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ptzt6" event={"ID":"570bea44-7c9b-4296-9188-5e5e590e4493","Type":"ContainerDied","Data":"180e154f7fbd0350557b954c2fc464484a1cfd2215ffd4d20d43eb6a00b5aa42"}
Jan 31 07:42:02 crc kubenswrapper[4908]: I0131 07:42:02.465220 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5575f74d-z4ns7"
Jan 31 07:42:02 crc kubenswrapper[4908]: I0131 07:42:02.712362 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c4bcc4864-knpgw"
Jan 31 07:42:03 crc kubenswrapper[4908]: I0131 07:42:03.161000 4908 generic.go:334] "Generic (PLEG): container finished" podID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerID="d4f42d63e581d82b6957567ac64af5a484da60b48cde1cf6866b235fca7500ce" exitCode=137
Jan 31 07:42:03 crc kubenswrapper[4908]: I0131 07:42:03.161041 4908 generic.go:334] "Generic (PLEG): container finished" podID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerID="086ab50e85bd3ec0467afd4182200a16399686c83a1be2a0280b12ee18936994" exitCode=137
Jan 31 07:42:03 crc kubenswrapper[4908]: I0131 07:42:03.161063 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerDied","Data":"d4f42d63e581d82b6957567ac64af5a484da60b48cde1cf6866b235fca7500ce"}
Jan 31 07:42:03 crc kubenswrapper[4908]: I0131 07:42:03.161091 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerDied","Data":"086ab50e85bd3ec0467afd4182200a16399686c83a1be2a0280b12ee18936994"}
Jan 31 07:42:04 crc kubenswrapper[4908]: I0131 07:42:04.122944 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5575f74d-z4ns7"
Jan 31 07:42:04 crc kubenswrapper[4908]: I0131 07:42:04.461227 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c4bcc4864-knpgw"
Jan 31 07:42:04 crc kubenswrapper[4908]: I0131 07:42:04.533140 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5575f74d-z4ns7"]
Jan 31 07:42:04 crc kubenswrapper[4908]: I0131 07:42:04.533622 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon-log" containerID="cri-o://58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0" gracePeriod=30
Jan 31 07:42:04 crc kubenswrapper[4908]: I0131 07:42:04.534047 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" containerID="cri-o://3f02a725a5cf0f911eb45588a65e49dc4b79959c69f08f892b82fdd352b40798" gracePeriod=30
Jan 31 07:42:05 crc kubenswrapper[4908]: I0131 07:42:05.188088 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c25925c-5085-4002-970e-65facdd68148" containerID="bdd197811d1f07feefe7ee9f4993d1ef69a24abc07b7c09f1cd6bd135540a6f0" exitCode=137
Jan 31 07:42:05 crc kubenswrapper[4908]: I0131 07:42:05.188122 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c25925c-5085-4002-970e-65facdd68148" containerID="70266d995b55dfdd9c4a891ce83853aa414d5e7e73155ffd800d3b894014fa96" exitCode=137
Jan 31 07:42:05 crc kubenswrapper[4908]: I0131 07:42:05.188144 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerDied","Data":"bdd197811d1f07feefe7ee9f4993d1ef69a24abc07b7c09f1cd6bd135540a6f0"}
Jan 31 07:42:05 crc kubenswrapper[4908]: I0131 07:42:05.188172 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerDied","Data":"70266d995b55dfdd9c4a891ce83853aa414d5e7e73155ffd800d3b894014fa96"}
Jan 31 07:42:09 crc kubenswrapper[4908]: E0131 07:42:09.165209 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 31 07:42:09 crc kubenswrapper[4908]: E0131 07:42:09.165562 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lwzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-krd5r_openstack(3b59881d-759b-492a-b475-c27f092660c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:42:09 crc kubenswrapper[4908]: E0131 07:42:09.166773 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-krd5r" podUID="3b59881d-759b-492a-b475-c27f092660c6"
Jan 31 07:42:09 crc kubenswrapper[4908]: I0131 07:42:09.218357 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4638785-cdd7-4526-ba1e-4e1772877042" containerID="3f02a725a5cf0f911eb45588a65e49dc4b79959c69f08f892b82fdd352b40798" exitCode=0
Jan 31 07:42:09 crc kubenswrapper[4908]: I0131 07:42:09.218440 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerDied","Data":"3f02a725a5cf0f911eb45588a65e49dc4b79959c69f08f892b82fdd352b40798"}
Jan 31 07:42:09 crc kubenswrapper[4908]: E0131 07:42:09.220067 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-krd5r" podUID="3b59881d-759b-492a-b475-c27f092660c6"
Jan 31 07:42:10 crc kubenswrapper[4908]: I0131 07:42:10.609428 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Jan 31 07:42:10 crc kubenswrapper[4908]: I0131 07:42:10.738454 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Jan 31 07:42:15 crc kubenswrapper[4908]: I0131 07:42:15.739085 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Jan 31 07:42:20 crc kubenswrapper[4908]: I0131 07:42:20.609992 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Jan 31 07:42:20 crc kubenswrapper[4908]: I0131 07:42:20.740488 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.689063 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bdv78"
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.696486 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ptzt6"
Jan 31 07:42:23 crc kubenswrapper[4908]: E0131 07:42:23.758695 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 31 07:42:23 crc kubenswrapper[4908]: E0131 07:42:23.759117 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5jb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-bmhth_openstack(a9f3cf33-1d4c-4fae-ac0e-54d917d43325): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 07:42:23 crc kubenswrapper[4908]: E0131 07:42:23.761158 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-bmhth" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325"
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769292 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb\") pod \"269111c9-ab7c-4600-a09a-d542812483ba\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769351 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc\") pod \"269111c9-ab7c-4600-a09a-d542812483ba\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769379 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769405 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb\") pod \"269111c9-ab7c-4600-a09a-d542812483ba\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769440 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr4qw\" (UniqueName: \"kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw\") pod \"269111c9-ab7c-4600-a09a-d542812483ba\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769465 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769489 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqln\" (UniqueName: \"kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769509 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769579 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769602 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data\") pod \"570bea44-7c9b-4296-9188-5e5e590e4493\" (UID: \"570bea44-7c9b-4296-9188-5e5e590e4493\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.769645 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config\") pod \"269111c9-ab7c-4600-a09a-d542812483ba\" (UID: \"269111c9-ab7c-4600-a09a-d542812483ba\") "
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.791677 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw" (OuterVolumeSpecName: "kube-api-access-nr4qw") pod "269111c9-ab7c-4600-a09a-d542812483ba" (UID: "269111c9-ab7c-4600-a09a-d542812483ba"). InnerVolumeSpecName "kube-api-access-nr4qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.791737 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.791798 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln" (OuterVolumeSpecName: "kube-api-access-7vqln") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "kube-api-access-7vqln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.798404 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts" (OuterVolumeSpecName: "scripts") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.807102 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.823557 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.826592 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data" (OuterVolumeSpecName: "config-data") pod "570bea44-7c9b-4296-9188-5e5e590e4493" (UID: "570bea44-7c9b-4296-9188-5e5e590e4493"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.848302 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "269111c9-ab7c-4600-a09a-d542812483ba" (UID: "269111c9-ab7c-4600-a09a-d542812483ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.860184 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "269111c9-ab7c-4600-a09a-d542812483ba" (UID: "269111c9-ab7c-4600-a09a-d542812483ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.864685 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config" (OuterVolumeSpecName: "config") pod "269111c9-ab7c-4600-a09a-d542812483ba" (UID: "269111c9-ab7c-4600-a09a-d542812483ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.869891 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "269111c9-ab7c-4600-a09a-d542812483ba" (UID: "269111c9-ab7c-4600-a09a-d542812483ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870868 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqln\" (UniqueName: \"kubernetes.io/projected/570bea44-7c9b-4296-9188-5e5e590e4493-kube-api-access-7vqln\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870884 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870895 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870904 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870913 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870920 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870928 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870936 4908 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870944 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/269111c9-ab7c-4600-a09a-d542812483ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870953 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr4qw\" (UniqueName: \"kubernetes.io/projected/269111c9-ab7c-4600-a09a-d542812483ba-kube-api-access-nr4qw\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:23 crc kubenswrapper[4908]: I0131 07:42:23.870961 4908 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/570bea44-7c9b-4296-9188-5e5e590e4493-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.380706 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bdv78"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.380699 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bdv78" event={"ID":"269111c9-ab7c-4600-a09a-d542812483ba","Type":"ContainerDied","Data":"226d2d504d6cc45455120e7b440c676d45694b1dd486fa4e19ffd70d364fdf97"}
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.389197 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ptzt6" event={"ID":"570bea44-7c9b-4296-9188-5e5e590e4493","Type":"ContainerDied","Data":"1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f"}
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.389568 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1088540517a9511bde5e623b5408c33d061d9700fced1d8bb2e9602c865a363f"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.389536 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ptzt6"
Jan 31 07:42:24 crc kubenswrapper[4908]: E0131 07:42:24.404056 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-bmhth" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.418510 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"]
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.428336 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bdv78"]
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.827365 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d75564955-lkfg2"]
Jan 31 07:42:24 crc kubenswrapper[4908]: E0131 07:42:24.827807 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="init"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.827821 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="init"
Jan 31 07:42:24 crc kubenswrapper[4908]: E0131 07:42:24.827835 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.827841 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns"
Jan 31 07:42:24 crc kubenswrapper[4908]: E0131 07:42:24.827852 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570bea44-7c9b-4296-9188-5e5e590e4493" containerName="keystone-bootstrap"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.827860 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="570bea44-7c9b-4296-9188-5e5e590e4493" containerName="keystone-bootstrap"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.828134 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.828156 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="570bea44-7c9b-4296-9188-5e5e590e4493" containerName="keystone-bootstrap"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.828735 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.831968 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.832262 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-68r4f"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.832573 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.832608 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.832799 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.832962 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.849626 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d75564955-lkfg2"]
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991113 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-fernet-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991180 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-credential-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991220 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqpj\" (UniqueName: \"kubernetes.io/projected/24d8623b-0810-4e1a-8ab2-83e734e71cb3-kube-api-access-ngqpj\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991293 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-internal-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991327 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-combined-ca-bundle\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991364 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-config-data\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991386 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-public-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:24 crc kubenswrapper[4908]: I0131 07:42:24.991430 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-scripts\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092598 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-config-data\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092649 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-public-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2"
Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092696 4908 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-scripts\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092744 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-fernet-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092776 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-credential-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092812 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqpj\" (UniqueName: \"kubernetes.io/projected/24d8623b-0810-4e1a-8ab2-83e734e71cb3-kube-api-access-ngqpj\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092870 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-internal-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.092905 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-combined-ca-bundle\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.098808 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-scripts\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.099320 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-public-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.099429 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-fernet-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.099370 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-internal-tls-certs\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.099744 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-config-data\") pod \"keystone-7d75564955-lkfg2\" (UID: 
\"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.099812 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-combined-ca-bundle\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.110580 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24d8623b-0810-4e1a-8ab2-83e734e71cb3-credential-keys\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.113306 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqpj\" (UniqueName: \"kubernetes.io/projected/24d8623b-0810-4e1a-8ab2-83e734e71cb3-kube-api-access-ngqpj\") pod \"keystone-7d75564955-lkfg2\" (UID: \"24d8623b-0810-4e1a-8ab2-83e734e71cb3\") " pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.150503 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.532148 4908 scope.go:117] "RemoveContainer" containerID="7f7c6afdbcc44ffd8afad0e835a7c5e55cd422049835838287de2b23120a5d27" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.616097 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.624562 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.703854 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key\") pod \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.703961 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnwxv\" (UniqueName: \"kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv\") pod \"7c25925c-5085-4002-970e-65facdd68148\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704016 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts\") pod \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704129 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28xrd\" (UniqueName: \"kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd\") pod \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704399 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key\") pod \"7c25925c-5085-4002-970e-65facdd68148\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704439 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts\") pod \"7c25925c-5085-4002-970e-65facdd68148\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704489 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data\") pod \"7c25925c-5085-4002-970e-65facdd68148\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704516 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs\") pod \"7c25925c-5085-4002-970e-65facdd68148\" (UID: \"7c25925c-5085-4002-970e-65facdd68148\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704589 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data\") pod \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.704645 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs\") pod \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\" (UID: \"d523cbb4-de42-4a3f-9636-ec5b89ad51b7\") " Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.705591 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs" (OuterVolumeSpecName: "logs") pod "d523cbb4-de42-4a3f-9636-ec5b89ad51b7" (UID: "d523cbb4-de42-4a3f-9636-ec5b89ad51b7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.706670 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs" (OuterVolumeSpecName: "logs") pod "7c25925c-5085-4002-970e-65facdd68148" (UID: "7c25925c-5085-4002-970e-65facdd68148"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.709454 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv" (OuterVolumeSpecName: "kube-api-access-vnwxv") pod "7c25925c-5085-4002-970e-65facdd68148" (UID: "7c25925c-5085-4002-970e-65facdd68148"). InnerVolumeSpecName "kube-api-access-vnwxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.710593 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd" (OuterVolumeSpecName: "kube-api-access-28xrd") pod "d523cbb4-de42-4a3f-9636-ec5b89ad51b7" (UID: "d523cbb4-de42-4a3f-9636-ec5b89ad51b7"). InnerVolumeSpecName "kube-api-access-28xrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.711186 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7c25925c-5085-4002-970e-65facdd68148" (UID: "7c25925c-5085-4002-970e-65facdd68148"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.711288 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d523cbb4-de42-4a3f-9636-ec5b89ad51b7" (UID: "d523cbb4-de42-4a3f-9636-ec5b89ad51b7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.729202 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data" (OuterVolumeSpecName: "config-data") pod "d523cbb4-de42-4a3f-9636-ec5b89ad51b7" (UID: "d523cbb4-de42-4a3f-9636-ec5b89ad51b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.733461 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts" (OuterVolumeSpecName: "scripts") pod "7c25925c-5085-4002-970e-65facdd68148" (UID: "7c25925c-5085-4002-970e-65facdd68148"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.733717 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data" (OuterVolumeSpecName: "config-data") pod "7c25925c-5085-4002-970e-65facdd68148" (UID: "7c25925c-5085-4002-970e-65facdd68148"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.738299 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts" (OuterVolumeSpecName: "scripts") pod "d523cbb4-de42-4a3f-9636-ec5b89ad51b7" (UID: "d523cbb4-de42-4a3f-9636-ec5b89ad51b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.743767 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bdv78" podUID="269111c9-ab7c-4600-a09a-d542812483ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806629 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnwxv\" (UniqueName: \"kubernetes.io/projected/7c25925c-5085-4002-970e-65facdd68148-kube-api-access-vnwxv\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806669 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806682 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28xrd\" (UniqueName: \"kubernetes.io/projected/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-kube-api-access-28xrd\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806692 4908 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c25925c-5085-4002-970e-65facdd68148-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806700 4908 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806708 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c25925c-5085-4002-970e-65facdd68148-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806717 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c25925c-5085-4002-970e-65facdd68148-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806725 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806732 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.806740 4908 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d523cbb4-de42-4a3f-9636-ec5b89ad51b7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:25 crc kubenswrapper[4908]: I0131 07:42:25.998850 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269111c9-ab7c-4600-a09a-d542812483ba" path="/var/lib/kubelet/pods/269111c9-ab7c-4600-a09a-d542812483ba/volumes" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 07:42:26.098326 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 
07:42:26.098465 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjxvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Securit
yContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8bx9v_openstack(ee8dbc71-e43b-49a6-9d68-78b987f39b89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 07:42:26.099991 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8bx9v" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.409072 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db75f7db5-rvdn8" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.409088 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db75f7db5-rvdn8" event={"ID":"d523cbb4-de42-4a3f-9636-ec5b89ad51b7","Type":"ContainerDied","Data":"c0153662ce181a89091f6f642f6558a9789fef19a793e7c0631e048ea94699bf"} Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.411320 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67dbb4cc-s5dk4" event={"ID":"7c25925c-5085-4002-970e-65facdd68148","Type":"ContainerDied","Data":"ef2ae5d05d76cd9c53912fef3252291fd9b528244a1fc792b733993a3e071763"} Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.411349 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c67dbb4cc-s5dk4" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 07:42:26.415283 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8bx9v" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.431757 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.440099 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6db75f7db5-rvdn8"] Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.448101 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.456410 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c67dbb4cc-s5dk4"] Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.522143 4908 scope.go:117] "RemoveContainer" containerID="125278caa89fd2dc3b60745fbe8bb7be2bb0eb9be00f697a873b643993ea71a0" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.570801 4908 scope.go:117] "RemoveContainer" containerID="d4f42d63e581d82b6957567ac64af5a484da60b48cde1cf6866b235fca7500ce" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 07:42:26.775803 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Jan 31 07:42:26 crc kubenswrapper[4908]: E0131 07:42:26.776022 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch55dhcdh54bh674h66h56h576hb8h74hbh55bhb8h567h666h57h664h8dhdch55h564h5dbh644h99hd6h6dh5ch598h585hdh8bhfbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lhkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9f90a57b-f1f8-4fc5-ac97-bdd418912544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.800934 4908 scope.go:117] "RemoveContainer" containerID="086ab50e85bd3ec0467afd4182200a16399686c83a1be2a0280b12ee18936994" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.817784 4908 scope.go:117] "RemoveContainer" containerID="bdd197811d1f07feefe7ee9f4993d1ef69a24abc07b7c09f1cd6bd135540a6f0" Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.955632 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d75564955-lkfg2"] Jan 31 07:42:26 crc kubenswrapper[4908]: I0131 07:42:26.981211 4908 scope.go:117] "RemoveContainer" containerID="70266d995b55dfdd9c4a891ce83853aa414d5e7e73155ffd800d3b894014fa96" Jan 31 07:42:26 crc kubenswrapper[4908]: W0131 07:42:26.983788 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d8623b_0810_4e1a_8ab2_83e734e71cb3.slice/crio-2838da88b447539d05e40866de28e5c7a2a1fe6028ba0d673bda48f05cea7d2d WatchSource:0}: Error finding container 2838da88b447539d05e40866de28e5c7a2a1fe6028ba0d673bda48f05cea7d2d: Status 404 returned error can't find the container with id 2838da88b447539d05e40866de28e5c7a2a1fe6028ba0d673bda48f05cea7d2d Jan 31 07:42:27 crc kubenswrapper[4908]: I0131 07:42:27.427310 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0"} Jan 31 07:42:27 crc kubenswrapper[4908]: I0131 07:42:27.428903 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d75564955-lkfg2" event={"ID":"24d8623b-0810-4e1a-8ab2-83e734e71cb3","Type":"ContainerStarted","Data":"6364641e1ed1a25cd304d0774610c65e5a1a31f3511a20bfad6a224d898f8472"} Jan 31 07:42:27 crc kubenswrapper[4908]: I0131 07:42:27.428939 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d75564955-lkfg2" event={"ID":"24d8623b-0810-4e1a-8ab2-83e734e71cb3","Type":"ContainerStarted","Data":"2838da88b447539d05e40866de28e5c7a2a1fe6028ba0d673bda48f05cea7d2d"} Jan 31 07:42:27 crc kubenswrapper[4908]: I0131 07:42:27.949615 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c25925c-5085-4002-970e-65facdd68148" path="/var/lib/kubelet/pods/7c25925c-5085-4002-970e-65facdd68148/volumes" Jan 31 07:42:27 crc kubenswrapper[4908]: I0131 07:42:27.950618 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" path="/var/lib/kubelet/pods/d523cbb4-de42-4a3f-9636-ec5b89ad51b7/volumes" Jan 31 07:42:28 crc kubenswrapper[4908]: I0131 07:42:28.438302 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-krd5r" event={"ID":"3b59881d-759b-492a-b475-c27f092660c6","Type":"ContainerStarted","Data":"2bd36076791b4526cd4ca77aa1139f6079b805edcaabb95501b08d9e06dfe87e"} Jan 31 07:42:28 crc kubenswrapper[4908]: I0131 07:42:28.464008 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d75564955-lkfg2" podStartSLOduration=4.463969665 podStartE2EDuration="4.463969665s" podCreationTimestamp="2026-01-31 07:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:42:28.458501272 +0000 UTC m=+1255.074445926" watchObservedRunningTime="2026-01-31 07:42:28.463969665 +0000 UTC m=+1255.079914319" Jan 31 07:42:30 crc kubenswrapper[4908]: I0131 07:42:30.609465 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 31 07:42:30 crc kubenswrapper[4908]: I0131 07:42:30.610114 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:42:30 crc kubenswrapper[4908]: I0131 07:42:30.632534 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-krd5r" podStartSLOduration=17.920748981 podStartE2EDuration="1m14.632494415s" podCreationTimestamp="2026-01-31 07:41:16 +0000 UTC" firstStartedPulling="2026-01-31 07:41:31.259603006 +0000 UTC m=+1197.875547660" lastFinishedPulling="2026-01-31 07:42:27.97134844 +0000 UTC m=+1254.587293094" observedRunningTime="2026-01-31 07:42:28.481459332 +0000 UTC m=+1255.097403986" watchObservedRunningTime="2026-01-31 07:42:30.632494415 +0000 UTC m=+1257.248439069" Jan 31 07:42:32 crc kubenswrapper[4908]: I0131 
07:42:32.491007 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f90a57b-f1f8-4fc5-ac97-bdd418912544","Type":"ContainerStarted","Data":"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63"} Jan 31 07:42:32 crc kubenswrapper[4908]: I0131 07:42:32.493652 4908 generic.go:334] "Generic (PLEG): container finished" podID="3b59881d-759b-492a-b475-c27f092660c6" containerID="2bd36076791b4526cd4ca77aa1139f6079b805edcaabb95501b08d9e06dfe87e" exitCode=0 Jan 31 07:42:32 crc kubenswrapper[4908]: I0131 07:42:32.493686 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krd5r" event={"ID":"3b59881d-759b-492a-b475-c27f092660c6","Type":"ContainerDied","Data":"2bd36076791b4526cd4ca77aa1139f6079b805edcaabb95501b08d9e06dfe87e"} Jan 31 07:42:33 crc kubenswrapper[4908]: I0131 07:42:33.805930 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krd5r" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.005771 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs\") pod \"3b59881d-759b-492a-b475-c27f092660c6\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.006137 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs" (OuterVolumeSpecName: "logs") pod "3b59881d-759b-492a-b475-c27f092660c6" (UID: "3b59881d-759b-492a-b475-c27f092660c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.006458 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle\") pod \"3b59881d-759b-492a-b475-c27f092660c6\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.006521 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data\") pod \"3b59881d-759b-492a-b475-c27f092660c6\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.006590 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts\") pod \"3b59881d-759b-492a-b475-c27f092660c6\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.006630 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lwzh\" (UniqueName: \"kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh\") pod \"3b59881d-759b-492a-b475-c27f092660c6\" (UID: \"3b59881d-759b-492a-b475-c27f092660c6\") " Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.009902 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b59881d-759b-492a-b475-c27f092660c6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.012055 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh" (OuterVolumeSpecName: "kube-api-access-2lwzh") pod 
"3b59881d-759b-492a-b475-c27f092660c6" (UID: "3b59881d-759b-492a-b475-c27f092660c6"). InnerVolumeSpecName "kube-api-access-2lwzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.012688 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts" (OuterVolumeSpecName: "scripts") pod "3b59881d-759b-492a-b475-c27f092660c6" (UID: "3b59881d-759b-492a-b475-c27f092660c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.030279 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b59881d-759b-492a-b475-c27f092660c6" (UID: "3b59881d-759b-492a-b475-c27f092660c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.049389 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data" (OuterVolumeSpecName: "config-data") pod "3b59881d-759b-492a-b475-c27f092660c6" (UID: "3b59881d-759b-492a-b475-c27f092660c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.111718 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lwzh\" (UniqueName: \"kubernetes.io/projected/3b59881d-759b-492a-b475-c27f092660c6-kube-api-access-2lwzh\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.111750 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.111759 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.111770 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b59881d-759b-492a-b475-c27f092660c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.509489 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krd5r" event={"ID":"3b59881d-759b-492a-b475-c27f092660c6","Type":"ContainerDied","Data":"8561bd6d73e2c2f3a973d0eb03e083a54fc635d325c546c39553be6b69c9c487"} Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.509526 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8561bd6d73e2c2f3a973d0eb03e083a54fc635d325c546c39553be6b69c9c487" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.509620 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-krd5r" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.600573 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b4c4c4576-4d8kj"] Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.600994 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601015 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.601047 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601054 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.601063 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601069 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon" Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.601079 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b59881d-759b-492a-b475-c27f092660c6" containerName="placement-db-sync" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601084 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b59881d-759b-492a-b475-c27f092660c6" containerName="placement-db-sync" Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.601092 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon" Jan 31 07:42:34 
crc kubenswrapper[4908]: I0131 07:42:34.601098 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601235 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601245 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601252 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c25925c-5085-4002-970e-65facdd68148" containerName="horizon" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601262 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b59881d-759b-492a-b475-c27f092660c6" containerName="placement-db-sync" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.601276 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d523cbb4-de42-4a3f-9636-ec5b89ad51b7" containerName="horizon-log" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.602139 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.605110 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9gcrz" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.605277 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.605396 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.605585 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.606114 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.612647 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b4c4c4576-4d8kj"] Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.617627 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-public-tls-certs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.617792 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c09e85d-b07b-4be2-b947-4777bcdd977a-logs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.617991 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndqm\" (UniqueName: \"kubernetes.io/projected/2c09e85d-b07b-4be2-b947-4777bcdd977a-kube-api-access-wndqm\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.618103 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-scripts\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.618152 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-combined-ca-bundle\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.618217 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-internal-tls-certs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.618360 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-config-data\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.719838 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-scripts\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.719889 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-combined-ca-bundle\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.719921 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-internal-tls-certs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.719961 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-config-data\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.720017 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-public-tls-certs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.720041 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2c09e85d-b07b-4be2-b947-4777bcdd977a-logs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.720086 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wndqm\" (UniqueName: \"kubernetes.io/projected/2c09e85d-b07b-4be2-b947-4777bcdd977a-kube-api-access-wndqm\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.720901 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c09e85d-b07b-4be2-b947-4777bcdd977a-logs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.726250 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-combined-ca-bundle\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.728581 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-public-tls-certs\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.728756 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-internal-tls-certs\") pod 
\"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.728943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-config-data\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.740427 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c09e85d-b07b-4be2-b947-4777bcdd977a-scripts\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.740943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wndqm\" (UniqueName: \"kubernetes.io/projected/2c09e85d-b07b-4be2-b947-4777bcdd977a-kube-api-access-wndqm\") pod \"placement-5b4c4c4576-4d8kj\" (UID: \"2c09e85d-b07b-4be2-b947-4777bcdd977a\") " pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:34 crc kubenswrapper[4908]: E0131 07:42:34.781362 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4638785_cdd7_4526_ba1e_4e1772877042.slice/crio-conmon-58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:42:34 crc kubenswrapper[4908]: I0131 07:42:34.951223 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:35 crc kubenswrapper[4908]: I0131 07:42:35.403796 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b4c4c4576-4d8kj"] Jan 31 07:42:35 crc kubenswrapper[4908]: I0131 07:42:35.520097 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4638785-cdd7-4526-ba1e-4e1772877042" containerID="58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0" exitCode=137 Jan 31 07:42:35 crc kubenswrapper[4908]: I0131 07:42:35.520173 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerDied","Data":"58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0"} Jan 31 07:42:35 crc kubenswrapper[4908]: I0131 07:42:35.521278 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b4c4c4576-4d8kj" event={"ID":"2c09e85d-b07b-4be2-b947-4777bcdd977a","Type":"ContainerStarted","Data":"5bd5e13375d84cfa12c0fae908df1a476c6a057318cd191e413f6aea6a2f28d6"} Jan 31 07:42:36 crc kubenswrapper[4908]: I0131 07:42:36.530112 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b4c4c4576-4d8kj" event={"ID":"2c09e85d-b07b-4be2-b947-4777bcdd977a","Type":"ContainerStarted","Data":"781fb5b81e5799b94b72021e4a127a40d3c109388e5a375863902c9aafc1c822"} Jan 31 07:42:37 crc kubenswrapper[4908]: I0131 07:42:37.545845 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b4c4c4576-4d8kj" event={"ID":"2c09e85d-b07b-4be2-b947-4777bcdd977a","Type":"ContainerStarted","Data":"c88760166fe1684dba5afc7cd8b2d068c9cd6ec2f6fcbd4a02568f4dd838174d"} Jan 31 07:42:37 crc kubenswrapper[4908]: I0131 07:42:37.546112 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:37 crc kubenswrapper[4908]: I0131 07:42:37.546126 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:42:40 crc kubenswrapper[4908]: I0131 07:42:40.962807 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b4c4c4576-4d8kj" podStartSLOduration=6.962783738 podStartE2EDuration="6.962783738s" podCreationTimestamp="2026-01-31 07:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:42:37.574595451 +0000 UTC m=+1264.190540105" watchObservedRunningTime="2026-01-31 07:42:40.962783738 +0000 UTC m=+1267.578728412" Jan 31 07:42:45 crc kubenswrapper[4908]: I0131 07:42:45.610267 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5575f74d-z4ns7" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.592289 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.677923 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.678291 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.678346 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs" (OuterVolumeSpecName: "logs") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.678388 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.679043 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5575f74d-z4ns7" event={"ID":"d4638785-cdd7-4526-ba1e-4e1772877042","Type":"ContainerDied","Data":"37ae495fceadba4a8baff3789dd55dbbfb331d086181232b9633dab47f10741c"} Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.679088 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4638785-cdd7-4526-ba1e-4e1772877042-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.679102 4908 scope.go:117] "RemoveContainer" containerID="3f02a725a5cf0f911eb45588a65e49dc4b79959c69f08f892b82fdd352b40798" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.679120 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5575f74d-z4ns7" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.703549 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts" (OuterVolumeSpecName: "scripts") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.708608 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data" (OuterVolumeSpecName: "config-data") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.779844 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.780230 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.780274 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssnbb\" (UniqueName: \"kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.780297 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key\") pod \"d4638785-cdd7-4526-ba1e-4e1772877042\" (UID: \"d4638785-cdd7-4526-ba1e-4e1772877042\") " Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.780627 4908 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.780639 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4638785-cdd7-4526-ba1e-4e1772877042-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.783234 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb" (OuterVolumeSpecName: "kube-api-access-ssnbb") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "kube-api-access-ssnbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.783560 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.802884 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.818481 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d4638785-cdd7-4526-ba1e-4e1772877042" (UID: "d4638785-cdd7-4526-ba1e-4e1772877042"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.882504 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.882570 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssnbb\" (UniqueName: \"kubernetes.io/projected/d4638785-cdd7-4526-ba1e-4e1772877042-kube-api-access-ssnbb\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.882596 4908 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4908]: I0131 07:42:49.882620 4908 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4638785-cdd7-4526-ba1e-4e1772877042-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:50 crc kubenswrapper[4908]: I0131 07:42:50.006251 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5575f74d-z4ns7"] Jan 31 07:42:50 crc kubenswrapper[4908]: I0131 07:42:50.012591 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5575f74d-z4ns7"] Jan 31 07:42:50 crc kubenswrapper[4908]: E0131 07:42:50.105595 4908 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 31 07:42:50 crc kubenswrapper[4908]: E0131 07:42:50.105774 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lhkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9f90a57b-f1f8-4fc5-ac97-bdd418912544): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:42:50 crc kubenswrapper[4908]: E0131 07:42:50.106959 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" 
podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" Jan 31 07:42:50 crc kubenswrapper[4908]: I0131 07:42:50.217819 4908 scope.go:117] "RemoveContainer" containerID="58d59291e69542654281566570683863121d3ebd52a0446fbb9eb03de6979ac0" Jan 31 07:42:50 crc kubenswrapper[4908]: I0131 07:42:50.700443 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" containerName="sg-core" containerID="cri-o://b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63" gracePeriod=30 Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.159250 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322045 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322289 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322313 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhkv\" (UniqueName: \"kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322332 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322382 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322824 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.322953 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.323317 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.323722 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle\") pod \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\" (UID: \"9f90a57b-f1f8-4fc5-ac97-bdd418912544\") " Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.324129 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.324146 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f90a57b-f1f8-4fc5-ac97-bdd418912544-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.327713 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts" (OuterVolumeSpecName: "scripts") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.328309 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv" (OuterVolumeSpecName: "kube-api-access-7lhkv") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "kube-api-access-7lhkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.328637 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data" (OuterVolumeSpecName: "config-data") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.328794 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.345787 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f90a57b-f1f8-4fc5-ac97-bdd418912544" (UID: "9f90a57b-f1f8-4fc5-ac97-bdd418912544"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.426104 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.426142 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.426156 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhkv\" (UniqueName: \"kubernetes.io/projected/9f90a57b-f1f8-4fc5-ac97-bdd418912544-kube-api-access-7lhkv\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.426164 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.426173 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90a57b-f1f8-4fc5-ac97-bdd418912544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.708948 4908 generic.go:334] "Generic (PLEG): container finished" podID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" containerID="b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63" exitCode=2 Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.709065 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.710105 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f90a57b-f1f8-4fc5-ac97-bdd418912544","Type":"ContainerDied","Data":"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63"} Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.710279 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f90a57b-f1f8-4fc5-ac97-bdd418912544","Type":"ContainerDied","Data":"eff0a2029a325efd2710c0a7e74c2ce9dbbc6c580c7de075f2092f2c05d19278"} Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.710329 4908 scope.go:117] "RemoveContainer" containerID="b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.710509 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bx9v" event={"ID":"ee8dbc71-e43b-49a6-9d68-78b987f39b89","Type":"ContainerStarted","Data":"5a86a9cecf68c69686e197d5cb7dd2280893873fbf065193f53ae3dd8613e2fe"} Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.712272 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bmhth" event={"ID":"a9f3cf33-1d4c-4fae-ac0e-54d917d43325","Type":"ContainerStarted","Data":"bc2fd22b47fe8a4ed84c4721fb8f25d493a18ca5a11a3e6022c1f0f4e1b3be8f"} Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.739946 4908 scope.go:117] "RemoveContainer" containerID="b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63" Jan 31 07:42:51 crc kubenswrapper[4908]: E0131 07:42:51.740543 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63\": container with ID starting with b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63 not found: ID does not 
exist" containerID="b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.740599 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63"} err="failed to get container status \"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63\": rpc error: code = NotFound desc = could not find container \"b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63\": container with ID starting with b41d076fedad59e70a2b00172a056f0d2995e70aaa66e5733805dc8a4331cb63 not found: ID does not exist" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.747395 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8bx9v" podStartSLOduration=16.340445556 podStartE2EDuration="1m35.747374722s" podCreationTimestamp="2026-01-31 07:41:16 +0000 UTC" firstStartedPulling="2026-01-31 07:41:31.252181514 +0000 UTC m=+1197.868126168" lastFinishedPulling="2026-01-31 07:42:50.65911068 +0000 UTC m=+1277.275055334" observedRunningTime="2026-01-31 07:42:51.740104655 +0000 UTC m=+1278.356049329" watchObservedRunningTime="2026-01-31 07:42:51.747374722 +0000 UTC m=+1278.363319376" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.754623 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bmhth" podStartSLOduration=15.356926459 podStartE2EDuration="1m34.754605519s" podCreationTimestamp="2026-01-31 07:41:17 +0000 UTC" firstStartedPulling="2026-01-31 07:41:31.259936144 +0000 UTC m=+1197.875880798" lastFinishedPulling="2026-01-31 07:42:50.657615204 +0000 UTC m=+1277.273559858" observedRunningTime="2026-01-31 07:42:51.752454456 +0000 UTC m=+1278.368399120" watchObservedRunningTime="2026-01-31 07:42:51.754605519 +0000 UTC m=+1278.370550173" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.817360 4908 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.827101 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.843641 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:51 crc kubenswrapper[4908]: E0131 07:42:51.844105 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon-log" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844121 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon-log" Jan 31 07:42:51 crc kubenswrapper[4908]: E0131 07:42:51.844145 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" containerName="sg-core" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844154 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" containerName="sg-core" Jan 31 07:42:51 crc kubenswrapper[4908]: E0131 07:42:51.844182 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844192 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844419 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" containerName="sg-core" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844435 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon-log" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.844452 4908 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" containerName="horizon" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.846205 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.848409 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.848601 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.852428 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.915486 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:51 crc kubenswrapper[4908]: E0131 07:42:51.916182 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-w2vkc log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-w2vkc log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="dbb53c31-eb71-4185-9a01-1d442868ef22" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.950484 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f90a57b-f1f8-4fc5-ac97-bdd418912544" path="/var/lib/kubelet/pods/9f90a57b-f1f8-4fc5-ac97-bdd418912544/volumes" Jan 31 07:42:51 crc kubenswrapper[4908]: I0131 07:42:51.951361 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4638785-cdd7-4526-ba1e-4e1772877042" path="/var/lib/kubelet/pods/d4638785-cdd7-4526-ba1e-4e1772877042/volumes" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.035666 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036095 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vkc\" (UniqueName: \"kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036159 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036188 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036228 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036340 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.036482 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.137966 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.138052 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vkc\" (UniqueName: \"kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.138092 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.138117 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc 
kubenswrapper[4908]: I0131 07:42:52.138160 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.138216 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.138250 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.139296 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.139430 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.145729 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data\") pod \"ceilometer-0\" (UID: 
\"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.155802 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.156046 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.156240 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.156497 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2vkc\" (UniqueName: \"kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc\") pod \"ceilometer-0\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.721094 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.731429 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.746970 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747242 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747328 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747437 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2vkc\" (UniqueName: \"kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747521 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747591 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747698 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data\") pod \"dbb53c31-eb71-4185-9a01-1d442868ef22\" (UID: \"dbb53c31-eb71-4185-9a01-1d442868ef22\") " Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747508 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.747805 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.748911 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.748944 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbb53c31-eb71-4185-9a01-1d442868ef22-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.752037 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc" (OuterVolumeSpecName: "kube-api-access-w2vkc") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "kube-api-access-w2vkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.752826 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts" (OuterVolumeSpecName: "scripts") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.755103 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.755141 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data" (OuterVolumeSpecName: "config-data") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.755179 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbb53c31-eb71-4185-9a01-1d442868ef22" (UID: "dbb53c31-eb71-4185-9a01-1d442868ef22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.850455 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.850487 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.850499 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vkc\" (UniqueName: \"kubernetes.io/projected/dbb53c31-eb71-4185-9a01-1d442868ef22-kube-api-access-w2vkc\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.850507 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 
07:42:52 crc kubenswrapper[4908]: I0131 07:42:52.850515 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbb53c31-eb71-4185-9a01-1d442868ef22-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.729202 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.807551 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.821469 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.832252 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.834393 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.837100 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.840209 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.842368 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866733 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866777 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866832 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866850 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866869 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scct4\" (UniqueName: \"kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866929 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.866951 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.950871 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb53c31-eb71-4185-9a01-1d442868ef22" path="/var/lib/kubelet/pods/dbb53c31-eb71-4185-9a01-1d442868ef22/volumes" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968149 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968194 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968232 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968249 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968329 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968346 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.968361 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scct4\" (UniqueName: \"kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.969083 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.969111 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.972531 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 
07:42:53.973003 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.974509 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.975037 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:53 crc kubenswrapper[4908]: I0131 07:42:53.989689 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scct4\" (UniqueName: \"kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4\") pod \"ceilometer-0\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " pod="openstack/ceilometer-0" Jan 31 07:42:54 crc kubenswrapper[4908]: I0131 07:42:54.164389 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:42:54 crc kubenswrapper[4908]: W0131 07:42:54.618052 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bbf9c70_7cc9_4b9b_8904_2fb6eb8a9877.slice/crio-3a641f7cb376b21dd524199e7a5736f1fa8dc76b93c1e80776246c3357959a2a WatchSource:0}: Error finding container 3a641f7cb376b21dd524199e7a5736f1fa8dc76b93c1e80776246c3357959a2a: Status 404 returned error can't find the container with id 3a641f7cb376b21dd524199e7a5736f1fa8dc76b93c1e80776246c3357959a2a Jan 31 07:42:54 crc kubenswrapper[4908]: I0131 07:42:54.618100 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:42:54 crc kubenswrapper[4908]: I0131 07:42:54.737736 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerStarted","Data":"3a641f7cb376b21dd524199e7a5736f1fa8dc76b93c1e80776246c3357959a2a"} Jan 31 07:42:55 crc kubenswrapper[4908]: I0131 07:42:55.151408 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:55 crc kubenswrapper[4908]: I0131 07:42:55.757511 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerStarted","Data":"3263d6d2cca98c761fe9fb84525928a313d07292efa54f50bbaea8b0b8f48062"} Jan 31 07:42:56 crc kubenswrapper[4908]: I0131 07:42:56.778925 4908 generic.go:334] "Generic (PLEG): container finished" podID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325" containerID="bc2fd22b47fe8a4ed84c4721fb8f25d493a18ca5a11a3e6022c1f0f4e1b3be8f" exitCode=0 Jan 31 07:42:56 crc kubenswrapper[4908]: I0131 07:42:56.779371 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bmhth" 
event={"ID":"a9f3cf33-1d4c-4fae-ac0e-54d917d43325","Type":"ContainerDied","Data":"bc2fd22b47fe8a4ed84c4721fb8f25d493a18ca5a11a3e6022c1f0f4e1b3be8f"} Jan 31 07:42:56 crc kubenswrapper[4908]: I0131 07:42:56.787719 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerStarted","Data":"ab279a4d27e23586fc2bbf678ee7c538fb60bc6f396db9b0236e70b7ad1fef29"} Jan 31 07:42:56 crc kubenswrapper[4908]: I0131 07:42:56.787762 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerStarted","Data":"c8333e89a22bd867aa4aaa6e63f9a0f6ea5c4156ca658c95794a5e0be2d2c255"} Jan 31 07:42:56 crc kubenswrapper[4908]: I0131 07:42:56.919926 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d75564955-lkfg2" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.117428 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bmhth" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.165803 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle\") pod \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.166022 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jb9\" (UniqueName: \"kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9\") pod \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.166116 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data\") pod \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\" (UID: \"a9f3cf33-1d4c-4fae-ac0e-54d917d43325\") " Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.171219 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a9f3cf33-1d4c-4fae-ac0e-54d917d43325" (UID: "a9f3cf33-1d4c-4fae-ac0e-54d917d43325"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.172529 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9" (OuterVolumeSpecName: "kube-api-access-s5jb9") pod "a9f3cf33-1d4c-4fae-ac0e-54d917d43325" (UID: "a9f3cf33-1d4c-4fae-ac0e-54d917d43325"). 
InnerVolumeSpecName "kube-api-access-s5jb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.202432 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9f3cf33-1d4c-4fae-ac0e-54d917d43325" (UID: "a9f3cf33-1d4c-4fae-ac0e-54d917d43325"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.268479 4908 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.268521 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.268530 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5jb9\" (UniqueName: \"kubernetes.io/projected/a9f3cf33-1d4c-4fae-ac0e-54d917d43325-kube-api-access-s5jb9\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.812246 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bmhth" event={"ID":"a9f3cf33-1d4c-4fae-ac0e-54d917d43325","Type":"ContainerDied","Data":"c1d2d8c20cd17fa159ab12e9301c65a4893e82407e653fdd123d28af9dfe83e0"} Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.812554 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1d2d8c20cd17fa159ab12e9301c65a4893e82407e653fdd123d28af9dfe83e0" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.812468 4908 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bmhth" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.983938 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 07:42:58 crc kubenswrapper[4908]: E0131 07:42:58.984375 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325" containerName="barbican-db-sync" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.984396 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325" containerName="barbican-db-sync" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.984584 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325" containerName="barbican-db-sync" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.985237 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.986921 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.987128 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.987324 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d6kp6" Jan 31 07:42:58 crc kubenswrapper[4908]: I0131 07:42:58.993315 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.078390 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68b6964dc9-tqj78"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.079627 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.082958 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.083104 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config-secret\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.083186 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.083211 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46td2\" (UniqueName: \"kubernetes.io/projected/f0d30b67-d125-4065-bb37-e91a0ba45b29-kube-api-access-46td2\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.083684 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.083891 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sfjjd" Jan 31 07:42:59 
crc kubenswrapper[4908]: I0131 07:42:59.084070 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.125050 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68b6964dc9-tqj78"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.138095 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d54fb7d5b-4554k"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.139654 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.144874 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.185927 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgtb\" (UniqueName: \"kubernetes.io/projected/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-kube-api-access-ckgtb\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186050 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-combined-ca-bundle\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186117 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data-custom\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186159 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config-secret\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186202 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data-custom\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186233 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-combined-ca-bundle\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186267 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-logs\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186287 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90a3a29-b5c4-4af3-a192-0d897b673ae7-logs\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186318 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186338 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46td2\" (UniqueName: \"kubernetes.io/projected/f0d30b67-d125-4065-bb37-e91a0ba45b29-kube-api-access-46td2\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186362 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186406 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186436 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.186459 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlv8\" (UniqueName: \"kubernetes.io/projected/a90a3a29-b5c4-4af3-a192-0d897b673ae7-kube-api-access-bjlv8\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.189329 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.189960 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d54fb7d5b-4554k"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.192168 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-openstack-config-secret\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.193197 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d30b67-d125-4065-bb37-e91a0ba45b29-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 
31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.207679 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46td2\" (UniqueName: \"kubernetes.io/projected/f0d30b67-d125-4065-bb37-e91a0ba45b29-kube-api-access-46td2\") pod \"openstackclient\" (UID: \"f0d30b67-d125-4065-bb37-e91a0ba45b29\") " pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291427 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data-custom\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291658 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-combined-ca-bundle\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291688 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-logs\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291702 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90a3a29-b5c4-4af3-a192-0d897b673ae7-logs\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 
07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291724 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291756 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291781 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlv8\" (UniqueName: \"kubernetes.io/projected/a90a3a29-b5c4-4af3-a192-0d897b673ae7-kube-api-access-bjlv8\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291810 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgtb\" (UniqueName: \"kubernetes.io/projected/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-kube-api-access-ckgtb\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291832 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-combined-ca-bundle\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " 
pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.291865 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data-custom\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.298733 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-logs\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.299550 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.300130 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90a3a29-b5c4-4af3-a192-0d897b673ae7-logs\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.300171 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.302543 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.305519 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.309506 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-config-data-custom\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.311609 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-combined-ca-bundle\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.314131 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.325452 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-combined-ca-bundle\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.327657 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90a3a29-b5c4-4af3-a192-0d897b673ae7-config-data-custom\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.330958 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgtb\" (UniqueName: \"kubernetes.io/projected/1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0-kube-api-access-ckgtb\") pod \"barbican-worker-68b6964dc9-tqj78\" (UID: \"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0\") " pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.362050 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.380211 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlv8\" (UniqueName: \"kubernetes.io/projected/a90a3a29-b5c4-4af3-a192-0d897b673ae7-kube-api-access-bjlv8\") pod \"barbican-keystone-listener-5d54fb7d5b-4554k\" (UID: \"a90a3a29-b5c4-4af3-a192-0d897b673ae7\") " pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.393820 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.393878 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxc4j\" (UniqueName: \"kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j\") pod 
\"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.393910 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.393945 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.394004 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.395624 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68b6964dc9-tqj78" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.404100 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.405482 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.413268 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.490885 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"] Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497345 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497445 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497552 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497614 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " 
pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497714 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxc4j\" (UniqueName: \"kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497781 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497820 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497889 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.497960 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " 
pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.498307 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.499301 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.499348 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.500114 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.501639 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.533796 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxc4j\" (UniqueName: \"kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j\") pod \"dnsmasq-dns-7f46f79845-hcj2h\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") " pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.591283 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.599497 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.599567 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.599592 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.599676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: 
\"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.599708 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.600457 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.603389 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.603587 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.604094 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" 
Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.618240 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5\") pod \"barbican-api-8448fb66b6-c7xrs\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.768698 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:42:59 crc kubenswrapper[4908]: I0131 07:42:59.775147 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.111653 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"] Jan 31 07:43:01 crc kubenswrapper[4908]: W0131 07:43:01.111812 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6184a33a_84dd_4024_984f_df5bb5cdf978.slice/crio-dc7652cce0f04affbba21e035f3f2b3f53e2234e320e16f392168eb2f6907c32 WatchSource:0}: Error finding container dc7652cce0f04affbba21e035f3f2b3f53e2234e320e16f392168eb2f6907c32: Status 404 returned error can't find the container with id dc7652cce0f04affbba21e035f3f2b3f53e2234e320e16f392168eb2f6907c32 Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.255413 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 07:43:01 crc kubenswrapper[4908]: W0131 07:43:01.260657 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d30b67_d125_4065_bb37_e91a0ba45b29.slice/crio-3501502faac943ba646632e6a299e6cf944773c9bf227b7fa2360f7da40e5815 WatchSource:0}: Error finding container 
3501502faac943ba646632e6a299e6cf944773c9bf227b7fa2360f7da40e5815: Status 404 returned error can't find the container with id 3501502faac943ba646632e6a299e6cf944773c9bf227b7fa2360f7da40e5815 Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.272563 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"] Jan 31 07:43:01 crc kubenswrapper[4908]: W0131 07:43:01.285838 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99eb244_3bee_4f92_9bb2_097fc753c198.slice/crio-500c4c783f413107856183116027d3fecd45a7cd025872163d724e3b0c504230 WatchSource:0}: Error finding container 500c4c783f413107856183116027d3fecd45a7cd025872163d724e3b0c504230: Status 404 returned error can't find the container with id 500c4c783f413107856183116027d3fecd45a7cd025872163d724e3b0c504230 Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.288321 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68b6964dc9-tqj78"] Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.371542 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d54fb7d5b-4554k"] Jan 31 07:43:01 crc kubenswrapper[4908]: W0131 07:43:01.379755 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90a3a29_b5c4_4af3_a192_0d897b673ae7.slice/crio-725de21dccd8b04dc28edb2c9138eda69c22e67648dfc66cf74f4865fe4a4394 WatchSource:0}: Error finding container 725de21dccd8b04dc28edb2c9138eda69c22e67648dfc66cf74f4865fe4a4394: Status 404 returned error can't find the container with id 725de21dccd8b04dc28edb2c9138eda69c22e67648dfc66cf74f4865fe4a4394 Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.853199 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" 
event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerStarted","Data":"dc7652cce0f04affbba21e035f3f2b3f53e2234e320e16f392168eb2f6907c32"} Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.854542 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" event={"ID":"a90a3a29-b5c4-4af3-a192-0d897b673ae7","Type":"ContainerStarted","Data":"725de21dccd8b04dc28edb2c9138eda69c22e67648dfc66cf74f4865fe4a4394"} Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.855953 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f0d30b67-d125-4065-bb37-e91a0ba45b29","Type":"ContainerStarted","Data":"3501502faac943ba646632e6a299e6cf944773c9bf227b7fa2360f7da40e5815"} Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.856963 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68b6964dc9-tqj78" event={"ID":"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0","Type":"ContainerStarted","Data":"5b71872cabbb8d74adc20a10a82bf5fb57afa95f7236012e6623d1f600fc4c0c"} Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.860315 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerStarted","Data":"5d510d55801ce0ffd53b69f6673295171e139627b6c4f319ac48ffb8e60e7988"} Jan 31 07:43:01 crc kubenswrapper[4908]: I0131 07:43:01.861748 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerStarted","Data":"500c4c783f413107856183116027d3fecd45a7cd025872163d724e3b0c504230"} Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.198192 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b879d7b7d-bdrjp"] Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.199447 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.204364 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.204521 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.218116 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b879d7b7d-bdrjp"] Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246405 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6c7z\" (UniqueName: \"kubernetes.io/projected/f507dd3e-9063-413d-9549-88d6a9298d28-kube-api-access-b6c7z\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246478 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-internal-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246578 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-combined-ca-bundle\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246676 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-public-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246711 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246752 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data-custom\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.246784 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f507dd3e-9063-413d-9549-88d6a9298d28-logs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.347813 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-internal-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.347863 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-combined-ca-bundle\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.347901 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-public-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.347934 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.347971 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data-custom\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.348012 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f507dd3e-9063-413d-9549-88d6a9298d28-logs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.348058 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6c7z\" (UniqueName: 
\"kubernetes.io/projected/f507dd3e-9063-413d-9549-88d6a9298d28-kube-api-access-b6c7z\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.348804 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f507dd3e-9063-413d-9549-88d6a9298d28-logs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.361907 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-internal-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.361923 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data-custom\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.362368 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-combined-ca-bundle\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.362575 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-config-data\") pod 
\"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.363000 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f507dd3e-9063-413d-9549-88d6a9298d28-public-tls-certs\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.368751 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6c7z\" (UniqueName: \"kubernetes.io/projected/f507dd3e-9063-413d-9549-88d6a9298d28-kube-api-access-b6c7z\") pod \"barbican-api-6b879d7b7d-bdrjp\" (UID: \"f507dd3e-9063-413d-9549-88d6a9298d28\") " pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.518621 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.874049 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerStarted","Data":"f0571bc4bb452de79deeceba020fe82ca70fda64aa675bddfbea62cbe1ddbd0c"} Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.875375 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerStarted","Data":"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"} Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.875532 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:43:02 crc kubenswrapper[4908]: I0131 07:43:02.897872 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.973154518 podStartE2EDuration="9.897812309s" podCreationTimestamp="2026-01-31 07:42:53 +0000 UTC" firstStartedPulling="2026-01-31 07:42:54.621765278 +0000 UTC m=+1281.237709932" lastFinishedPulling="2026-01-31 07:43:00.546423059 +0000 UTC m=+1287.162367723" observedRunningTime="2026-01-31 07:43:02.893692178 +0000 UTC m=+1289.509636842" watchObservedRunningTime="2026-01-31 07:43:02.897812309 +0000 UTC m=+1289.513756963" Jan 31 07:43:03 crc kubenswrapper[4908]: I0131 07:43:03.028838 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b879d7b7d-bdrjp"] Jan 31 07:43:03 crc kubenswrapper[4908]: I0131 07:43:03.883708 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b879d7b7d-bdrjp" event={"ID":"f507dd3e-9063-413d-9549-88d6a9298d28","Type":"ContainerStarted","Data":"6a244dc9cf0870f89ef1031974dfe8bac93062ce95f39b523dc4efa2723fa28c"} Jan 31 07:43:06 crc kubenswrapper[4908]: 
I0131 07:43:06.698186 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.791062 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b4c4c4576-4d8kj" Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.981251 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b879d7b7d-bdrjp" event={"ID":"f507dd3e-9063-413d-9549-88d6a9298d28","Type":"ContainerStarted","Data":"b78fed1d3d4dae9956fa4cf9457f29d9112eee572af058d16b6f9bedca725843"} Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.989415 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerStarted","Data":"c9ef71c4a6ae08297c3e1af97eb961da0bd431dc2d4629cd9840e24851348682"} Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.990217 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.990261 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.993338 4908 generic.go:334] "Generic (PLEG): container finished" podID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerID="5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6" exitCode=0 Jan 31 07:43:06 crc kubenswrapper[4908]: I0131 07:43:06.994084 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerDied","Data":"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"} Jan 31 07:43:07 crc kubenswrapper[4908]: I0131 07:43:07.014075 4908 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-api-8448fb66b6-c7xrs" podStartSLOduration=8.013967581 podStartE2EDuration="8.013967581s" podCreationTimestamp="2026-01-31 07:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:07.012643989 +0000 UTC m=+1293.628588663" watchObservedRunningTime="2026-01-31 07:43:07.013967581 +0000 UTC m=+1293.629912235" Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.018422 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b879d7b7d-bdrjp" event={"ID":"f507dd3e-9063-413d-9549-88d6a9298d28","Type":"ContainerStarted","Data":"8928d9ec859e4bb4e0242d6a0573fbf7d656ac93b4ebf9d39ae359091ef9a65d"} Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.019018 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.019115 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b879d7b7d-bdrjp" Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.022528 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerStarted","Data":"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"} Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.022741 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.106775 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b879d7b7d-bdrjp" podStartSLOduration=6.106744434 podStartE2EDuration="6.106744434s" podCreationTimestamp="2026-01-31 07:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:08.081498168 +0000 UTC m=+1294.697442842" watchObservedRunningTime="2026-01-31 07:43:08.106744434 +0000 UTC m=+1294.722689088" Jan 31 07:43:08 crc kubenswrapper[4908]: I0131 07:43:08.108354 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" podStartSLOduration=9.108347843 podStartE2EDuration="9.108347843s" podCreationTimestamp="2026-01-31 07:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:08.101274691 +0000 UTC m=+1294.717219365" watchObservedRunningTime="2026-01-31 07:43:08.108347843 +0000 UTC m=+1294.724292497" Jan 31 07:43:09 crc kubenswrapper[4908]: I0131 07:43:09.035045 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" event={"ID":"a90a3a29-b5c4-4af3-a192-0d897b673ae7","Type":"ContainerStarted","Data":"d59c8f93dab1541b559ac1c7bf81f2eafcfd55a61f30ad0aec9f632fa1dfdb8e"} Jan 31 07:43:09 crc kubenswrapper[4908]: I0131 07:43:09.039941 4908 generic.go:334] "Generic (PLEG): container finished" podID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" containerID="5a86a9cecf68c69686e197d5cb7dd2280893873fbf065193f53ae3dd8613e2fe" exitCode=0 Jan 31 07:43:09 crc kubenswrapper[4908]: I0131 07:43:09.040033 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bx9v" event={"ID":"ee8dbc71-e43b-49a6-9d68-78b987f39b89","Type":"ContainerDied","Data":"5a86a9cecf68c69686e197d5cb7dd2280893873fbf065193f53ae3dd8613e2fe"} Jan 31 07:43:09 crc kubenswrapper[4908]: I0131 07:43:09.054433 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68b6964dc9-tqj78" event={"ID":"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0","Type":"ContainerStarted","Data":"38a82bde01cd1b52c69630c4cbab21101c019fcf3859a5a1772a12de631498d9"} Jan 
31 07:43:09 crc kubenswrapper[4908]: I0131 07:43:09.922361 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.068676 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" event={"ID":"a90a3a29-b5c4-4af3-a192-0d897b673ae7","Type":"ContainerStarted","Data":"e2ccf6711f05077325f1c7895abdce7e1b9fb3dafacb88c3ae3d81d99ebc00ab"} Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.073509 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68b6964dc9-tqj78" event={"ID":"1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0","Type":"ContainerStarted","Data":"4601afab4823193cafe1acdef00dbe933045d34b9057de63148d82ee0f698bc6"} Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.093846 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d54fb7d5b-4554k" podStartSLOduration=3.74124768 podStartE2EDuration="11.093822188s" podCreationTimestamp="2026-01-31 07:42:59 +0000 UTC" firstStartedPulling="2026-01-31 07:43:01.384424698 +0000 UTC m=+1288.000369352" lastFinishedPulling="2026-01-31 07:43:08.736999206 +0000 UTC m=+1295.352943860" observedRunningTime="2026-01-31 07:43:10.08693542 +0000 UTC m=+1296.702880074" watchObservedRunningTime="2026-01-31 07:43:10.093822188 +0000 UTC m=+1296.709766852" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.102640 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68b6964dc9-tqj78" podStartSLOduration=3.611479395 podStartE2EDuration="11.102625333s" podCreationTimestamp="2026-01-31 07:42:59 +0000 UTC" firstStartedPulling="2026-01-31 07:43:01.284757807 +0000 UTC m=+1287.900702461" lastFinishedPulling="2026-01-31 07:43:08.775903745 +0000 UTC m=+1295.391848399" observedRunningTime="2026-01-31 07:43:10.100722016 +0000 UTC 
m=+1296.716666670" watchObservedRunningTime="2026-01-31 07:43:10.102625333 +0000 UTC m=+1296.718569987" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.512806 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.650496 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts\") pod \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.650833 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data\") pod \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.650892 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxvn\" (UniqueName: \"kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn\") pod \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.650963 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data\") pod \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.651078 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle\") pod 
\"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.651196 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id\") pod \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\" (UID: \"ee8dbc71-e43b-49a6-9d68-78b987f39b89\") " Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.651475 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.652774 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8dbc71-e43b-49a6-9d68-78b987f39b89-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.659141 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.679118 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts" (OuterVolumeSpecName: "scripts") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.686168 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn" (OuterVolumeSpecName: "kube-api-access-wjxvn") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "kube-api-access-wjxvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.716182 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.721195 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data" (OuterVolumeSpecName: "config-data") pod "ee8dbc71-e43b-49a6-9d68-78b987f39b89" (UID: "ee8dbc71-e43b-49a6-9d68-78b987f39b89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.757062 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxvn\" (UniqueName: \"kubernetes.io/projected/ee8dbc71-e43b-49a6-9d68-78b987f39b89-kube-api-access-wjxvn\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.757092 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.757101 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.757110 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:10 crc kubenswrapper[4908]: I0131 07:43:10.757118 4908 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee8dbc71-e43b-49a6-9d68-78b987f39b89-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.091578 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8bx9v" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.091724 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bx9v" event={"ID":"ee8dbc71-e43b-49a6-9d68-78b987f39b89","Type":"ContainerDied","Data":"a6e8ef87ca1a02976b2e75ea54b18d453b8994c3d5cda64dd0f8011116a3ae95"} Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.091759 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6e8ef87ca1a02976b2e75ea54b18d453b8994c3d5cda64dd0f8011116a3ae95" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.337489 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:11 crc kubenswrapper[4908]: E0131 07:43:11.337946 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" containerName="cinder-db-sync" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.337965 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" containerName="cinder-db-sync" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.338152 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" containerName="cinder-db-sync" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.339046 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.347611 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.347993 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.348118 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.348178 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cpqgs" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.384173 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.455529 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"] Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.455788 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="dnsmasq-dns" containerID="cri-o://2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e" gracePeriod=10 Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.492755 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.492826 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmrv\" 
(UniqueName: \"kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.492849 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.492948 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.493187 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.493240 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.513474 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"] Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.515526 4908 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.548342 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"] Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.598186 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.600711 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607294 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x97l2\" (UniqueName: \"kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607395 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczsj\" (UniqueName: \"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607433 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607494 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrmrv\" (UniqueName: 
\"kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607522 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607587 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607640 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607665 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607733 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom\") 
pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607777 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607855 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607909 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.607947 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.608014 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc 
kubenswrapper[4908]: I0131 07:43:11.608078 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.608193 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.608246 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.608269 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.611960 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.617642 4908 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-api-config-data"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.617926 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.629380 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.630568 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.638470 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.655534 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.656511 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmrv\" (UniqueName: \"kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv\") pod \"cinder-scheduler-0\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.673817 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.720733 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.720832 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.720864 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.720951 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721054 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721082 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721104 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721151 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x97l2\" (UniqueName: \"kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721204 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczsj\" (UniqueName: \"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721252 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721279 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.721300 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.722543 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.722714 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.727249 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.727703 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.728566 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.734927 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.750482 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.754464 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczsj\" (UniqueName: \"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.757946 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x97l2\" (UniqueName: \"kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.770540 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.771677 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-xlrg2\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.774296 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " pod="openstack/cinder-api-0"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.822912 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:11 crc kubenswrapper[4908]: I0131 07:43:11.835835 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.078171 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.107321 4908 generic.go:334] "Generic (PLEG): container finished" podID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerID="2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e" exitCode=0
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.107374 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerDied","Data":"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"}
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.107403 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h" event={"ID":"6184a33a-84dd-4024-984f-df5bb5cdf978","Type":"ContainerDied","Data":"dc7652cce0f04affbba21e035f3f2b3f53e2234e320e16f392168eb2f6907c32"}
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.107422 4908 scope.go:117] "RemoveContainer" containerID="2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.107503 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-hcj2h"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.129264 4908 scope.go:117] "RemoveContainer" containerID="5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.159079 4908 scope.go:117] "RemoveContainer" containerID="2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"
Jan 31 07:43:12 crc kubenswrapper[4908]: E0131 07:43:12.159470 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e\": container with ID starting with 2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e not found: ID does not exist" containerID="2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.159512 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e"} err="failed to get container status \"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e\": rpc error: code = NotFound desc = could not find container \"2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e\": container with ID starting with 2f7f460e293e7255a4eb8556f8aed3d84c8f2b6defe7b3915b55cee35d4e990e not found: ID does not exist"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.159538 4908 scope.go:117] "RemoveContainer" containerID="5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"
Jan 31 07:43:12 crc kubenswrapper[4908]: E0131 07:43:12.159786 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6\": container with ID starting with 5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6 not found: ID does not exist" containerID="5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.159815 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6"} err="failed to get container status \"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6\": rpc error: code = NotFound desc = could not find container \"5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6\": container with ID starting with 5b41118779a30c449f3618a3743cfecc18a85987091c3a4c180e94b4fbcc6bf6 not found: ID does not exist"
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.237233 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb\") pod \"6184a33a-84dd-4024-984f-df5bb5cdf978\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") "
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.237336 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config\") pod \"6184a33a-84dd-4024-984f-df5bb5cdf978\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") "
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.237462 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxc4j\" (UniqueName: \"kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j\") pod \"6184a33a-84dd-4024-984f-df5bb5cdf978\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") "
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.237510 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc\") pod \"6184a33a-84dd-4024-984f-df5bb5cdf978\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") "
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.237562 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb\") pod \"6184a33a-84dd-4024-984f-df5bb5cdf978\" (UID: \"6184a33a-84dd-4024-984f-df5bb5cdf978\") "
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.242887 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j" (OuterVolumeSpecName: "kube-api-access-xxc4j") pod "6184a33a-84dd-4024-984f-df5bb5cdf978" (UID: "6184a33a-84dd-4024-984f-df5bb5cdf978"). InnerVolumeSpecName "kube-api-access-xxc4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.288007 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config" (OuterVolumeSpecName: "config") pod "6184a33a-84dd-4024-984f-df5bb5cdf978" (UID: "6184a33a-84dd-4024-984f-df5bb5cdf978"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.302042 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6184a33a-84dd-4024-984f-df5bb5cdf978" (UID: "6184a33a-84dd-4024-984f-df5bb5cdf978"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.311493 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6184a33a-84dd-4024-984f-df5bb5cdf978" (UID: "6184a33a-84dd-4024-984f-df5bb5cdf978"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.324079 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6184a33a-84dd-4024-984f-df5bb5cdf978" (UID: "6184a33a-84dd-4024-984f-df5bb5cdf978"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.340222 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.340255 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.340272 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxc4j\" (UniqueName: \"kubernetes.io/projected/6184a33a-84dd-4024-984f-df5bb5cdf978-kube-api-access-xxc4j\") on node \"crc\" DevicePath \"\""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.340283 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.340291 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6184a33a-84dd-4024-984f-df5bb5cdf978-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.354749 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 07:43:12 crc kubenswrapper[4908]: W0131 07:43:12.374509 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25723372_e8bd_4939_a894_5d994e36b942.slice/crio-a2c269e3087d08f0dcb891098bf41d1c2695fde384cba3f01200984914e91ea5 WatchSource:0}: Error finding container a2c269e3087d08f0dcb891098bf41d1c2695fde384cba3f01200984914e91ea5: Status 404 returned error can't find the container with id a2c269e3087d08f0dcb891098bf41d1c2695fde384cba3f01200984914e91ea5
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.455898 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"]
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.465878 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-hcj2h"]
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.525357 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"]
Jan 31 07:43:12 crc kubenswrapper[4908]: I0131 07:43:12.541378 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 31 07:43:12 crc kubenswrapper[4908]: W0131 07:43:12.562129 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e6536a_a222_489b_90cc_47995bf7c8ec.slice/crio-15f1716a3b98b0ab3d22074b8e82c0b35d7e30e78b2b23e934536f86304147e9 WatchSource:0}: Error finding container 15f1716a3b98b0ab3d22074b8e82c0b35d7e30e78b2b23e934536f86304147e9: Status 404 returned error can't find the container with id 15f1716a3b98b0ab3d22074b8e82c0b35d7e30e78b2b23e934536f86304147e9
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.130359 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerStarted","Data":"15f1716a3b98b0ab3d22074b8e82c0b35d7e30e78b2b23e934536f86304147e9"}
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.131587 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerStarted","Data":"a2c269e3087d08f0dcb891098bf41d1c2695fde384cba3f01200984914e91ea5"}
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.146511 4908 generic.go:334] "Generic (PLEG): container finished" podID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerID="2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a" exitCode=0
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.146544 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" event={"ID":"3e69c80a-41fd-4df0-8758-e105a45521bf","Type":"ContainerDied","Data":"2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a"}
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.146562 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" event={"ID":"3e69c80a-41fd-4df0-8758-e105a45521bf","Type":"ContainerStarted","Data":"500b4afc91ab0235cb29cc43ad8f7dbeed13910b3f8ddec769fb9cb4dffb9b10"}
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.957313 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" path="/var/lib/kubelet/pods/6184a33a-84dd-4024-984f-df5bb5cdf978/volumes"
Jan 31 07:43:13 crc kubenswrapper[4908]: I0131 07:43:13.973932 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 31 07:43:14 crc kubenswrapper[4908]: I0131 07:43:14.170203 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerStarted","Data":"ee1adf4bb511e91ee4f8ea4b6f82d999586026167cacaa45dc685b794551fe88"}
Jan 31 07:43:14 crc kubenswrapper[4908]: I0131 07:43:14.172832 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" event={"ID":"3e69c80a-41fd-4df0-8758-e105a45521bf","Type":"ContainerStarted","Data":"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1"}
Jan 31 07:43:14 crc kubenswrapper[4908]: I0131 07:43:14.173790 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:14 crc kubenswrapper[4908]: I0131 07:43:14.197804 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" podStartSLOduration=3.197787393 podStartE2EDuration="3.197787393s" podCreationTimestamp="2026-01-31 07:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:14.190157667 +0000 UTC m=+1300.806102321" watchObservedRunningTime="2026-01-31 07:43:14.197787393 +0000 UTC m=+1300.813732047"
Jan 31 07:43:14 crc kubenswrapper[4908]: I0131 07:43:14.955661 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b879d7b7d-bdrjp"
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.081792 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b879d7b7d-bdrjp"
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.174221 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"]
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.174442 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" containerID="cri-o://f0571bc4bb452de79deeceba020fe82ca70fda64aa675bddfbea62cbe1ddbd0c" gracePeriod=30
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.174806 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" containerID="cri-o://c9ef71c4a6ae08297c3e1af97eb961da0bd431dc2d4629cd9840e24851348682" gracePeriod=30
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.188047 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF"
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.188152 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF"
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.213492 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerStarted","Data":"51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7"}
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.213663 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api-log" containerID="cri-o://ee1adf4bb511e91ee4f8ea4b6f82d999586026167cacaa45dc685b794551fe88" gracePeriod=30
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.213807 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.213894 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api" containerID="cri-o://51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7" gracePeriod=30
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.227700 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerStarted","Data":"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6"}
Jan 31 07:43:15 crc kubenswrapper[4908]: I0131 07:43:15.243419 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.243399156 podStartE2EDuration="4.243399156s" podCreationTimestamp="2026-01-31 07:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:15.231133737 +0000 UTC m=+1301.847078391" watchObservedRunningTime="2026-01-31 07:43:15.243399156 +0000 UTC m=+1301.859343810"
Jan 31 07:43:15 crc kubenswrapper[4908]: E0131 07:43:15.722596 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e6536a_a222_489b_90cc_47995bf7c8ec.slice/crio-conmon-51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.242576 4908 generic.go:334] "Generic (PLEG): container finished" podID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerID="f0571bc4bb452de79deeceba020fe82ca70fda64aa675bddfbea62cbe1ddbd0c" exitCode=143
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.242666 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerDied","Data":"f0571bc4bb452de79deeceba020fe82ca70fda64aa675bddfbea62cbe1ddbd0c"}
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.245930 4908 generic.go:334] "Generic (PLEG): container finished" podID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerID="51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7" exitCode=0
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.245963 4908 generic.go:334] "Generic (PLEG): container finished" podID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerID="ee1adf4bb511e91ee4f8ea4b6f82d999586026167cacaa45dc685b794551fe88" exitCode=143
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.246036 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerDied","Data":"51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7"}
Jan 31 07:43:16 crc kubenswrapper[4908]: I0131 07:43:16.246086 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerDied","Data":"ee1adf4bb511e91ee4f8ea4b6f82d999586026167cacaa45dc685b794551fe88"}
Jan 31 07:43:19 crc kubenswrapper[4908]: I0131 07:43:19.291539 4908 generic.go:334] "Generic (PLEG): container finished" podID="d1b5c255-9609-4fc5-a3af-10d0faf40366" containerID="51d33265242be6c5d8c594fbe23fcc24be49780a685e54cda6de66912ed24619" exitCode=0
Jan 31 07:43:19 crc kubenswrapper[4908]: I0131 07:43:19.291712 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbm78" event={"ID":"d1b5c255-9609-4fc5-a3af-10d0faf40366","Type":"ContainerDied","Data":"51d33265242be6c5d8c594fbe23fcc24be49780a685e54cda6de66912ed24619"}
Jan 31 07:43:20 crc kubenswrapper[4908]: I0131 07:43:20.229338 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:43:20 crc kubenswrapper[4908]: I0131 07:43:20.605109 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:49316->10.217.0.152:9311: read: connection reset by peer"
Jan 31 07:43:20 crc kubenswrapper[4908]: I0131 07:43:20.605150 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:49312->10.217.0.152:9311: read: connection reset by peer"
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.221567 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.223861 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-notification-agent" containerID="cri-o://c8333e89a22bd867aa4aaa6e63f9a0f6ea5c4156ca658c95794a5e0be2d2c255" gracePeriod=30
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.223892 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="proxy-httpd" containerID="cri-o://5d510d55801ce0ffd53b69f6673295171e139627b6c4f319ac48ffb8e60e7988" gracePeriod=30
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.223862 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="sg-core" containerID="cri-o://ab279a4d27e23586fc2bbf678ee7c538fb60bc6f396db9b0236e70b7ad1fef29" gracePeriod=30
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.224060 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-central-agent" containerID="cri-o://3263d6d2cca98c761fe9fb84525928a313d07292efa54f50bbaea8b0b8f48062" gracePeriod=30
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.244686 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.147:3000/\": EOF"
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.323843 4908 generic.go:334] "Generic (PLEG): container finished" podID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerID="c9ef71c4a6ae08297c3e1af97eb961da0bd431dc2d4629cd9840e24851348682" exitCode=0
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.323917 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerDied","Data":"c9ef71c4a6ae08297c3e1af97eb961da0bd431dc2d4629cd9840e24851348682"}
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.825188 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.839386 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.156:8776/healthcheck\": dial tcp 10.217.0.156:8776: connect: connection refused"
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.888495 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"]
Jan 31 07:43:21 crc kubenswrapper[4908]: I0131 07:43:21.888787 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="dnsmasq-dns" containerID="cri-o://d50769a9eb35ecb483198f59a855657aab9b4dd430b68690ac7822adc4c3eef9" gracePeriod=10
Jan 31 07:43:22 crc kubenswrapper[4908]: I0131 07:43:22.347214 4908 generic.go:334] "Generic (PLEG): container finished" podID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerID="ab279a4d27e23586fc2bbf678ee7c538fb60bc6f396db9b0236e70b7ad1fef29" exitCode=2
Jan 31 07:43:22 crc kubenswrapper[4908]: I0131 07:43:22.347252 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerDied","Data":"ab279a4d27e23586fc2bbf678ee7c538fb60bc6f396db9b0236e70b7ad1fef29"}
Jan 31 07:43:23 crc kubenswrapper[4908]: I0131 07:43:23.674865 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused"
Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.165452 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.147:3000/\": 
dial tcp 10.217.0.147:3000: connect: connection refused" Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369521 4908 generic.go:334] "Generic (PLEG): container finished" podID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerID="5d510d55801ce0ffd53b69f6673295171e139627b6c4f319ac48ffb8e60e7988" exitCode=0 Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369562 4908 generic.go:334] "Generic (PLEG): container finished" podID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerID="c8333e89a22bd867aa4aaa6e63f9a0f6ea5c4156ca658c95794a5e0be2d2c255" exitCode=0 Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369575 4908 generic.go:334] "Generic (PLEG): container finished" podID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerID="3263d6d2cca98c761fe9fb84525928a313d07292efa54f50bbaea8b0b8f48062" exitCode=0 Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369608 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerDied","Data":"5d510d55801ce0ffd53b69f6673295171e139627b6c4f319ac48ffb8e60e7988"} Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369647 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerDied","Data":"c8333e89a22bd867aa4aaa6e63f9a0f6ea5c4156ca658c95794a5e0be2d2c255"} Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.369658 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerDied","Data":"3263d6d2cca98c761fe9fb84525928a313d07292efa54f50bbaea8b0b8f48062"} Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.775718 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.152:9311/healthcheck\": dial tcp 10.217.0.152:9311: connect: connection refused" Jan 31 07:43:24 crc kubenswrapper[4908]: I0131 07:43:24.776131 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8448fb66b6-c7xrs" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": dial tcp 10.217.0.152:9311: connect: connection refused" Jan 31 07:43:25 crc kubenswrapper[4908]: E0131 07:43:25.291081 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 31 07:43:25 crc kubenswrapper[4908]: E0131 07:43:25.291495 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59hc5h5b9h8fh589h5dfh546h54fh54hffh94h9dh57h58h5f6h8hd6hbfh649hf9h5bfhc7h8hd4h58bh59fh599hdh5c5h649h689h95q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,
MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46td2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(f0d30b67-d125-4065-bb37-e91a0ba45b29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:43:25 crc kubenswrapper[4908]: E0131 07:43:25.292881 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="f0d30b67-d125-4065-bb37-e91a0ba45b29" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.322847 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lbm78" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.396364 4908 generic.go:334] "Generic (PLEG): container finished" podID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerID="d50769a9eb35ecb483198f59a855657aab9b4dd430b68690ac7822adc4c3eef9" exitCode=0 Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.396725 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" event={"ID":"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c","Type":"ContainerDied","Data":"d50769a9eb35ecb483198f59a855657aab9b4dd430b68690ac7822adc4c3eef9"} Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.402848 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbm78" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.403011 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbm78" event={"ID":"d1b5c255-9609-4fc5-a3af-10d0faf40366","Type":"ContainerDied","Data":"ff1082203cf4e751061e08d00f5194a4e6544972e1736467ac4d09e79e3aa977"} Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.403039 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1082203cf4e751061e08d00f5194a4e6544972e1736467ac4d09e79e3aa977" Jan 31 07:43:25 crc kubenswrapper[4908]: E0131 07:43:25.421493 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="f0d30b67-d125-4065-bb37-e91a0ba45b29" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.507838 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.517124 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp484\" (UniqueName: \"kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484\") pod \"d1b5c255-9609-4fc5-a3af-10d0faf40366\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.517202 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle\") pod \"d1b5c255-9609-4fc5-a3af-10d0faf40366\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.517428 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config\") pod \"d1b5c255-9609-4fc5-a3af-10d0faf40366\" (UID: \"d1b5c255-9609-4fc5-a3af-10d0faf40366\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.526819 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484" (OuterVolumeSpecName: "kube-api-access-hp484") pod "d1b5c255-9609-4fc5-a3af-10d0faf40366" (UID: "d1b5c255-9609-4fc5-a3af-10d0faf40366"). InnerVolumeSpecName "kube-api-access-hp484". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.553411 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config" (OuterVolumeSpecName: "config") pod "d1b5c255-9609-4fc5-a3af-10d0faf40366" (UID: "d1b5c255-9609-4fc5-a3af-10d0faf40366"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.573912 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1b5c255-9609-4fc5-a3af-10d0faf40366" (UID: "d1b5c255-9609-4fc5-a3af-10d0faf40366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619669 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619762 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619789 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fczsj\" (UniqueName: \"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619862 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619889 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619920 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.619947 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts\") pod \"01e6536a-a222-489b-90cc-47995bf7c8ec\" (UID: \"01e6536a-a222-489b-90cc-47995bf7c8ec\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.620422 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp484\" (UniqueName: \"kubernetes.io/projected/d1b5c255-9609-4fc5-a3af-10d0faf40366-kube-api-access-hp484\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.620444 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.620456 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1b5c255-9609-4fc5-a3af-10d0faf40366-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.627110 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj" (OuterVolumeSpecName: "kube-api-access-fczsj") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "kube-api-access-fczsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.627450 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs" (OuterVolumeSpecName: "logs") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.627489 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.627644 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts" (OuterVolumeSpecName: "scripts") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.636219 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.662953 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725091 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fczsj\" (UniqueName: \"kubernetes.io/projected/01e6536a-a222-489b-90cc-47995bf7c8ec-kube-api-access-fczsj\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725141 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e6536a-a222-489b-90cc-47995bf7c8ec-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725171 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725183 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6536a-a222-489b-90cc-47995bf7c8ec-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725197 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.725207 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.731317 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data" (OuterVolumeSpecName: "config-data") pod "01e6536a-a222-489b-90cc-47995bf7c8ec" (UID: "01e6536a-a222-489b-90cc-47995bf7c8ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.798928 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.827268 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6536a-a222-489b-90cc-47995bf7c8ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.930258 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.931700 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.931861 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932061 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs\") pod \"b99eb244-3bee-4f92-9bb2-097fc753c198\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932163 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle\") pod \"b99eb244-3bee-4f92-9bb2-097fc753c198\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932310 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data\") pod \"b99eb244-3bee-4f92-9bb2-097fc753c198\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932391 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932516 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5\") pod \"b99eb244-3bee-4f92-9bb2-097fc753c198\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932596 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932721 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scct4\" (UniqueName: \"kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.932886 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom\") pod \"b99eb244-3bee-4f92-9bb2-097fc753c198\" (UID: \"b99eb244-3bee-4f92-9bb2-097fc753c198\") " Jan 31 07:43:25 crc kubenswrapper[4908]: 
I0131 07:43:25.933046 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data\") pod \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\" (UID: \"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877\") " Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.944427 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs" (OuterVolumeSpecName: "logs") pod "b99eb244-3bee-4f92-9bb2-097fc753c198" (UID: "b99eb244-3bee-4f92-9bb2-097fc753c198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.944699 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.945022 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.966179 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts" (OuterVolumeSpecName: "scripts") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.966365 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4" (OuterVolumeSpecName: "kube-api-access-scct4") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "kube-api-access-scct4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.977216 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5" (OuterVolumeSpecName: "kube-api-access-vkkf5") pod "b99eb244-3bee-4f92-9bb2-097fc753c198" (UID: "b99eb244-3bee-4f92-9bb2-097fc753c198"). InnerVolumeSpecName "kube-api-access-vkkf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:25 crc kubenswrapper[4908]: I0131 07:43:25.983209 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b99eb244-3bee-4f92-9bb2-097fc753c198" (UID: "b99eb244-3bee-4f92-9bb2-097fc753c198"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.020338 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.080664 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scct4\" (UniqueName: \"kubernetes.io/projected/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-kube-api-access-scct4\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.080997 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081008 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081022 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081032 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99eb244-3bee-4f92-9bb2-097fc753c198-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081041 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081049 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/b99eb244-3bee-4f92-9bb2-097fc753c198-kube-api-access-vkkf5\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.081061 4908 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.089448 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.157136 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b99eb244-3bee-4f92-9bb2-097fc753c198" (UID: "b99eb244-3bee-4f92-9bb2-097fc753c198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.158826 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data" (OuterVolumeSpecName: "config-data") pod "b99eb244-3bee-4f92-9bb2-097fc753c198" (UID: "b99eb244-3bee-4f92-9bb2-097fc753c198"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.182842 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config\") pod \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.182949 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc\") pod \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.183009 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb\") pod \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.183613 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r69rw\" (UniqueName: \"kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw\") pod \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.183703 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb\") pod \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\" (UID: \"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c\") " Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.184559 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.184577 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99eb244-3bee-4f92-9bb2-097fc753c198-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.193046 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw" (OuterVolumeSpecName: "kube-api-access-r69rw") pod "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" (UID: "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c"). InnerVolumeSpecName "kube-api-access-r69rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.224383 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.244693 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" (UID: "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.248388 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" (UID: "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.254304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config" (OuterVolumeSpecName: "config") pod "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" (UID: "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.255080 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data" (OuterVolumeSpecName: "config-data") pod "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" (UID: "2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.266481 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" (UID: "c7fc3e04-968b-4c28-84b6-1cb5d95ad52c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286091 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286127 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286138 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r69rw\" (UniqueName: \"kubernetes.io/projected/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-kube-api-access-r69rw\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286148 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286158 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286166 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.286175 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.415755 4908 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.416574 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877","Type":"ContainerDied","Data":"3a641f7cb376b21dd524199e7a5736f1fa8dc76b93c1e80776246c3357959a2a"} Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.416613 4908 scope.go:117] "RemoveContainer" containerID="5d510d55801ce0ffd53b69f6673295171e139627b6c4f319ac48ffb8e60e7988" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.420828 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448fb66b6-c7xrs" event={"ID":"b99eb244-3bee-4f92-9bb2-097fc753c198","Type":"ContainerDied","Data":"500c4c783f413107856183116027d3fecd45a7cd025872163d724e3b0c504230"} Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.421092 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8448fb66b6-c7xrs" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.429960 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" event={"ID":"c7fc3e04-968b-4c28-84b6-1cb5d95ad52c","Type":"ContainerDied","Data":"c6f49810702dfcd0bf07d72c1b7f17a9e424035905821d2b484694548af4766a"} Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.430170 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.441510 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01e6536a-a222-489b-90cc-47995bf7c8ec","Type":"ContainerDied","Data":"15f1716a3b98b0ab3d22074b8e82c0b35d7e30e78b2b23e934536f86304147e9"} Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.441612 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.475388 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.482429 4908 scope.go:117] "RemoveContainer" containerID="ab279a4d27e23586fc2bbf678ee7c538fb60bc6f396db9b0236e70b7ad1fef29" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.498905 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.502881 4908 scope.go:117] "RemoveContainer" containerID="c8333e89a22bd867aa4aaa6e63f9a0f6ea5c4156ca658c95794a5e0be2d2c255" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.516702 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517109 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517126 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517135 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517141 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517149 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517155 4908 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517170 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517176 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517185 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517190 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517200 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517205 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517215 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="init" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517221 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="init" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517229 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="init" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517237 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="init" 
Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517251 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="sg-core" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517256 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="sg-core" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517268 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-notification-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517275 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-notification-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517286 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-central-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517292 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-central-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517301 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b5c255-9609-4fc5-a3af-10d0faf40366" containerName="neutron-db-sync" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517306 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b5c255-9609-4fc5-a3af-10d0faf40366" containerName="neutron-db-sync" Jan 31 07:43:26 crc kubenswrapper[4908]: E0131 07:43:26.517318 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="proxy-httpd" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517324 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" 
containerName="proxy-httpd" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517476 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517488 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="proxy-httpd" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517498 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517508 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-central-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517515 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b5c255-9609-4fc5-a3af-10d0faf40366" containerName="neutron-db-sync" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517527 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api-log" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517533 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" containerName="barbican-api" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517542 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" containerName="cinder-api" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517548 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="sg-core" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517557 4908 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" containerName="ceilometer-notification-agent" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.517563 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="6184a33a-84dd-4024-984f-df5bb5cdf978" containerName="dnsmasq-dns" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.518934 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.524129 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.525319 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.552306 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.580158 4908 scope.go:117] "RemoveContainer" containerID="3263d6d2cca98c761fe9fb84525928a313d07292efa54f50bbaea8b0b8f48062" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.580268 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8448fb66b6-c7xrs"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.587385 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.601999 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603252 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc 
kubenswrapper[4908]: I0131 07:43:26.603326 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ltf\" (UniqueName: \"kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603365 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603415 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603453 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603467 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.603488 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.629634 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.656154 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.674620 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-5zhxx"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.693704 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.695218 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.709045 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.709474 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.711795 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77ltf\" (UniqueName: 
\"kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.711848 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.711917 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.711968 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.712031 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.712063 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 
07:43:26.713389 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.723019 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.723193 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.725611 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.730512 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.733126 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.734480 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.738447 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.738667 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.738769 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.742353 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ltf\" (UniqueName: \"kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.743054 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.744681 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts\") pod \"ceilometer-0\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.755058 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.756553 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.760591 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.760875 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d9cpc" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.761246 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.761499 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.768153 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813168 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813218 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813240 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-logs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " 
pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813285 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813312 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczgt\" (UniqueName: \"kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813331 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg7qv\" (UniqueName: \"kubernetes.io/projected/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-kube-api-access-wg7qv\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813357 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813382 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 
07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813400 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813420 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813438 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-scripts\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813451 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813473 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813503 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813519 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813544 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813564 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813584 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.813603 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh7h\" (UniqueName: \"kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.816536 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.817191 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.855679 4908 scope.go:117] "RemoveContainer" containerID="c9ef71c4a6ae08297c3e1af97eb961da0bd431dc2d4629cd9840e24851348682" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.899247 4908 scope.go:117] "RemoveContainer" containerID="f0571bc4bb452de79deeceba020fe82ca70fda64aa675bddfbea62cbe1ddbd0c" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915338 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915382 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915403 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-logs\") pod \"cinder-api-0\" (UID: 
\"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915446 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915475 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nczgt\" (UniqueName: \"kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915496 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg7qv\" (UniqueName: \"kubernetes.io/projected/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-kube-api-access-wg7qv\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915531 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915559 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 
07:43:26.915580 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915601 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915619 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-scripts\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915632 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915653 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915689 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915706 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915736 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915756 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915773 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.915793 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh7h\" (UniqueName: \"kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h\") pod \"neutron-7bbb449548-wkq7p\" (UID: 
\"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.918104 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.918115 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.921419 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-logs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.924352 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.924601 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.925513 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.926278 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.926316 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.926492 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-config-data\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.926892 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.927446 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " 
pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.930408 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.930594 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.931536 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.933605 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.933706 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-scripts\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.933858 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh7h\" (UniqueName: 
\"kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h\") pod \"neutron-7bbb449548-wkq7p\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") " pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.939055 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg7qv\" (UniqueName: \"kubernetes.io/projected/e4b5f98a-7a0f-42c1-8af4-c220716fe6b4-kube-api-access-wg7qv\") pod \"cinder-api-0\" (UID: \"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4\") " pod="openstack/cinder-api-0" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.940500 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nczgt\" (UniqueName: \"kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt\") pod \"dnsmasq-dns-58db5546cc-bwjnj\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") " pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.952361 4908 scope.go:117] "RemoveContainer" containerID="d50769a9eb35ecb483198f59a855657aab9b4dd430b68690ac7822adc4c3eef9" Jan 31 07:43:26 crc kubenswrapper[4908]: I0131 07:43:26.978514 4908 scope.go:117] "RemoveContainer" containerID="661125ce2d6af699b6880ba16e7232844af4958271706b5011c7d6e840eab426" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.062134 4908 scope.go:117] "RemoveContainer" containerID="51e0c6fab00e445eceb6949bf3428de2b05b9fcf40718991322c595dd85b12b7" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.105111 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.105145 4908 scope.go:117] "RemoveContainer" containerID="ee1adf4bb511e91ee4f8ea4b6f82d999586026167cacaa45dc685b794551fe88" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.114415 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.124423 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.415782 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.454652 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerStarted","Data":"95cd51186388efe2e10f32dcc626c4f8344af928cf2696d86b0867ee0425116c"} Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.464590 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerStarted","Data":"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1"} Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.493190 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=15.377055144 podStartE2EDuration="16.493174025s" podCreationTimestamp="2026-01-31 07:43:11 +0000 UTC" firstStartedPulling="2026-01-31 07:43:12.379125957 +0000 UTC m=+1298.995070611" lastFinishedPulling="2026-01-31 07:43:13.495244838 +0000 UTC m=+1300.111189492" observedRunningTime="2026-01-31 07:43:27.486235626 +0000 UTC m=+1314.102180280" watchObservedRunningTime="2026-01-31 07:43:27.493174025 +0000 UTC m=+1314.109118679" Jan 31 07:43:27 crc kubenswrapper[4908]: W0131 07:43:27.729228 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4b5f98a_7a0f_42c1_8af4_c220716fe6b4.slice/crio-0f1c75aba5b1cd829798b293ba06f0746c54f93fbe5f2dbb0e4a147dbdfb50b3 WatchSource:0}: Error finding container 
0f1c75aba5b1cd829798b293ba06f0746c54f93fbe5f2dbb0e4a147dbdfb50b3: Status 404 returned error can't find the container with id 0f1c75aba5b1cd829798b293ba06f0746c54f93fbe5f2dbb0e4a147dbdfb50b3 Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.732287 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.812754 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"] Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.959899 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e6536a-a222-489b-90cc-47995bf7c8ec" path="/var/lib/kubelet/pods/01e6536a-a222-489b-90cc-47995bf7c8ec/volumes" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.966197 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877" path="/var/lib/kubelet/pods/2bbf9c70-7cc9-4b9b-8904-2fb6eb8a9877/volumes" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.967336 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99eb244-3bee-4f92-9bb2-097fc753c198" path="/var/lib/kubelet/pods/b99eb244-3bee-4f92-9bb2-097fc753c198/volumes" Jan 31 07:43:27 crc kubenswrapper[4908]: I0131 07:43:27.968046 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fc3e04-968b-4c28-84b6-1cb5d95ad52c" path="/var/lib/kubelet/pods/c7fc3e04-968b-4c28-84b6-1cb5d95ad52c/volumes" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.479379 4908 generic.go:334] "Generic (PLEG): container finished" podID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerID="90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645" exitCode=0 Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.479584 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" 
event={"ID":"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8","Type":"ContainerDied","Data":"90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645"} Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.479729 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" event={"ID":"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8","Type":"ContainerStarted","Data":"b013292294a58c6455ab1167fed981da0ce56139eadd4de9c2ffac6ec6941e81"} Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.482938 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4","Type":"ContainerStarted","Data":"0f1c75aba5b1cd829798b293ba06f0746c54f93fbe5f2dbb0e4a147dbdfb50b3"} Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.768560 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"] Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.917931 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6577f594f-lz5n8"] Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.919443 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.924535 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.924739 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.932970 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6577f594f-lz5n8"] Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985252 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-ovndb-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985626 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-httpd-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985727 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-internal-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985821 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-public-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985877 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.985940 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-combined-ca-bundle\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:28 crc kubenswrapper[4908]: I0131 07:43:28.986018 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtdl\" (UniqueName: \"kubernetes.io/projected/6333954b-6275-456a-a634-1273c62c4cec-kube-api-access-qxtdl\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087807 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-httpd-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087857 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-internal-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087889 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-public-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087915 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087942 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-combined-ca-bundle\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.087970 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtdl\" (UniqueName: \"kubernetes.io/projected/6333954b-6275-456a-a634-1273c62c4cec-kube-api-access-qxtdl\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.088046 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-ovndb-tls-certs\") pod 
\"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.093389 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-ovndb-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.094653 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-public-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.096880 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-internal-tls-certs\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.097669 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-combined-ca-bundle\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.100638 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-httpd-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc 
kubenswrapper[4908]: I0131 07:43:29.106893 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6333954b-6275-456a-a634-1273c62c4cec-config\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.115612 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtdl\" (UniqueName: \"kubernetes.io/projected/6333954b-6275-456a-a634-1273c62c4cec-kube-api-access-qxtdl\") pod \"neutron-6577f594f-lz5n8\" (UID: \"6333954b-6275-456a-a634-1273c62c4cec\") " pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.261148 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.619284 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerStarted","Data":"592f3dfe11849df6e806c89bec4c960d74c4c26b0a3bc317a33bd0b60297330c"} Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.635910 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" event={"ID":"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8","Type":"ContainerStarted","Data":"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141"} Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.636454 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.646628 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4","Type":"ContainerStarted","Data":"7a1fe750c1e007176d61008424040dab67c83a9731627111afa1719c9e12e860"} Jan 31 
07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.652397 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerStarted","Data":"5f18187ba37f22c2dbe97b309a1bbc29e5a900f2359dec9d7499b5b17bbdcb95"} Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.652449 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerStarted","Data":"3cece66178658d357fb41431396362aa2621985337051b7015cb1d6c172aeea6"} Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.652615 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerStarted","Data":"4cf0e316ceca10efdd77091d6cc235a2d130357cd13cbf94a21c95af6fc59c94"} Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.654449 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.706151 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" podStartSLOduration=3.706132689 podStartE2EDuration="3.706132689s" podCreationTimestamp="2026-01-31 07:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:29.665599761 +0000 UTC m=+1316.281544425" watchObservedRunningTime="2026-01-31 07:43:29.706132689 +0000 UTC m=+1316.322077333" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.915407 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bbb449548-wkq7p" podStartSLOduration=3.915377583 podStartE2EDuration="3.915377583s" podCreationTimestamp="2026-01-31 07:43:26 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:29.692272631 +0000 UTC m=+1316.308217285" watchObservedRunningTime="2026-01-31 07:43:29.915377583 +0000 UTC m=+1316.531322237" Jan 31 07:43:29 crc kubenswrapper[4908]: I0131 07:43:29.917891 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6577f594f-lz5n8"] Jan 31 07:43:29 crc kubenswrapper[4908]: W0131 07:43:29.937854 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6333954b_6275_456a_a634_1273c62c4cec.slice/crio-6d3642ee87a2a1cf9bfea9f0c7300ae0ea859513dc90a46bd65a898a10b84593 WatchSource:0}: Error finding container 6d3642ee87a2a1cf9bfea9f0c7300ae0ea859513dc90a46bd65a898a10b84593: Status 404 returned error can't find the container with id 6d3642ee87a2a1cf9bfea9f0c7300ae0ea859513dc90a46bd65a898a10b84593 Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.664185 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6577f594f-lz5n8" event={"ID":"6333954b-6275-456a-a634-1273c62c4cec","Type":"ContainerStarted","Data":"433dcccc9bc16b72bb55a98bcfe6c747c26175e93c910015d707fe459a46cba4"} Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.664822 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.664838 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6577f594f-lz5n8" event={"ID":"6333954b-6275-456a-a634-1273c62c4cec","Type":"ContainerStarted","Data":"80293f99132e8f7154bf53dfd197e64c7f06aac78c19244ad62f9ffd3747fda1"} Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.664851 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6577f594f-lz5n8" 
event={"ID":"6333954b-6275-456a-a634-1273c62c4cec","Type":"ContainerStarted","Data":"6d3642ee87a2a1cf9bfea9f0c7300ae0ea859513dc90a46bd65a898a10b84593"} Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.666413 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerStarted","Data":"8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e"} Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.668450 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4b5f98a-7a0f-42c1-8af4-c220716fe6b4","Type":"ContainerStarted","Data":"7bce1f621d38f797213bbe35898ec74d3db9474b09161b069db37b53c610c59f"} Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.690157 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6577f594f-lz5n8" podStartSLOduration=2.69014034 podStartE2EDuration="2.69014034s" podCreationTimestamp="2026-01-31 07:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:30.683638711 +0000 UTC m=+1317.299583365" watchObservedRunningTime="2026-01-31 07:43:30.69014034 +0000 UTC m=+1317.306084994" Jan 31 07:43:30 crc kubenswrapper[4908]: I0131 07:43:30.710286 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.71026676 podStartE2EDuration="4.71026676s" podCreationTimestamp="2026-01-31 07:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:30.707257897 +0000 UTC m=+1317.323202551" watchObservedRunningTime="2026-01-31 07:43:30.71026676 +0000 UTC m=+1317.326211414" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.675430 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.678258 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerStarted","Data":"1bb7941f690a9dd1e83229d75f2ff0f710f5f5e4bc0dab3a5706685c5b066de2"} Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.678660 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.933826 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mbdvd"] Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.936129 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.936933 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5nz\" (UniqueName: \"kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.937038 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:31 crc kubenswrapper[4908]: I0131 07:43:31.955744 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mbdvd"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.038439 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g5nz\" 
(UniqueName: \"kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.038848 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kkx5h"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.039362 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.040267 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.040351 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.051483 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ada9-account-create-update-pprbw"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.052519 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.055853 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.096231 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kkx5h"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.098782 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5nz\" (UniqueName: \"kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz\") pod \"nova-api-db-create-mbdvd\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.123476 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ada9-account-create-update-pprbw"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.141398 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.142709 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.142900 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gc6\" (UniqueName: \"kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc 
kubenswrapper[4908]: I0131 07:43:32.143165 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.143449 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdx72\" (UniqueName: \"kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.229022 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b44a-account-create-update-bd28z"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.230425 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.232528 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.241604 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b44a-account-create-update-bd28z"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.244752 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.244850 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb882\" (UniqueName: \"kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.244873 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdx72\" (UniqueName: \"kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.244895 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.244927 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.245079 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gc6\" (UniqueName: \"kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.245464 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.245730 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.273210 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gc6\" (UniqueName: 
\"kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6\") pod \"nova-cell0-db-create-kkx5h\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.273998 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdx72\" (UniqueName: \"kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72\") pod \"nova-api-ada9-account-create-update-pprbw\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.280523 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.333513 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zhkcp"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.334577 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.347151 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.347197 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb882\" (UniqueName: \"kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.347225 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.347378 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvtx\" (UniqueName: \"kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.348537 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.351220 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zhkcp"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.361836 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.370024 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb882\" (UniqueName: \"kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882\") pod \"nova-cell0-b44a-account-create-update-bd28z\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.386781 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.449683 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.449802 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glvtx\" (UniqueName: \"kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.450968 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.451043 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b96a-account-create-update-tppx9"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.452204 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.456197 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.470455 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b96a-account-create-update-tppx9"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.474585 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glvtx\" (UniqueName: \"kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx\") pod \"nova-cell1-db-create-zhkcp\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.538965 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.552200 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.553212 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.553332 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsksx\" (UniqueName: \"kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.655300 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.655676 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsksx\" (UniqueName: \"kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.656457 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.673655 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsksx\" (UniqueName: \"kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx\") pod \"nova-cell1-b96a-account-create-update-tppx9\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.750818 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.873048 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:32 crc kubenswrapper[4908]: I0131 07:43:32.939277 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mbdvd"] Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.058868 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ada9-account-create-update-pprbw"] Jan 31 07:43:33 crc kubenswrapper[4908]: W0131 07:43:33.067076 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ced2b4_a151_4b9c_9bc2_1f9eee1e2220.slice/crio-069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73 WatchSource:0}: Error finding container 069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73: Status 404 returned error can't find the container with id 069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73 Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.240036 4908 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b44a-account-create-update-bd28z"] Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.262416 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kkx5h"] Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.577722 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zhkcp"] Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.700251 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ada9-account-create-update-pprbw" event={"ID":"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220","Type":"ContainerStarted","Data":"50eda1ac36c54e872cf51f718cccffdc29dfb09a5e5827257c12b06db26bafc6"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.700293 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ada9-account-create-update-pprbw" event={"ID":"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220","Type":"ContainerStarted","Data":"069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.701417 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhkcp" event={"ID":"a874264b-db1f-4a01-9f15-c1e50e22b854","Type":"ContainerStarted","Data":"90aeb9820da8d376662551f5fb06da4a976d87045938fe807281b4b077537cda"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.702864 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kkx5h" event={"ID":"b99de16f-0b42-4cfb-b041-c0a388bc31e0","Type":"ContainerStarted","Data":"c4111be580f71c81f7e5629e3ab6cec9e978eda21a807fca55e56e0d00a7596f"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.702902 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kkx5h" 
event={"ID":"b99de16f-0b42-4cfb-b041-c0a388bc31e0","Type":"ContainerStarted","Data":"f049de1b9d88ff6ab492d91b9d2e52898aad64e250ebfb7f3b9d70733c19ae60"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.703692 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" event={"ID":"c4ca7fd3-7fa3-4048-8325-f58efea50f94","Type":"ContainerStarted","Data":"a1686ae54de6ff85beb5152904e609ff3b9ef8725aa090a2130d7126890f8280"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.705081 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mbdvd" event={"ID":"4b02f781-7443-49e3-aa1e-a9a1a8a36329","Type":"ContainerStarted","Data":"6082979425b80523e7336fd990b3cbabac41140f2b3c15cc96cc5cc0d13cd27a"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.705116 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mbdvd" event={"ID":"4b02f781-7443-49e3-aa1e-a9a1a8a36329","Type":"ContainerStarted","Data":"4c464968aecce6a659d89d908194282e097ff17794578d6d3b9fb21f7906281f"} Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.705327 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="cinder-scheduler" containerID="cri-o://300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6" gracePeriod=30 Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.705376 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="probe" containerID="cri-o://10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1" gracePeriod=30 Jan 31 07:43:33 crc kubenswrapper[4908]: I0131 07:43:33.730455 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b96a-account-create-update-tppx9"] Jan 31 07:43:33 crc 
kubenswrapper[4908]: I0131 07:43:33.730833 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-mbdvd" podStartSLOduration=2.730817611 podStartE2EDuration="2.730817611s" podCreationTimestamp="2026-01-31 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:33.723972224 +0000 UTC m=+1320.339916878" watchObservedRunningTime="2026-01-31 07:43:33.730817611 +0000 UTC m=+1320.346762275" Jan 31 07:43:33 crc kubenswrapper[4908]: W0131 07:43:33.731503 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0405b85c_acf0_4a3a_9018_c34165dd440a.slice/crio-30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5 WatchSource:0}: Error finding container 30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5: Status 404 returned error can't find the container with id 30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5 Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.726232 4908 generic.go:334] "Generic (PLEG): container finished" podID="25723372-e8bd-4939-a894-5d994e36b942" containerID="10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1" exitCode=0 Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.726296 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerDied","Data":"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1"} Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.728906 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" event={"ID":"0405b85c-acf0-4a3a-9018-c34165dd440a","Type":"ContainerStarted","Data":"30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5"} Jan 31 07:43:34 crc 
kubenswrapper[4908]: I0131 07:43:34.732349 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" event={"ID":"c4ca7fd3-7fa3-4048-8325-f58efea50f94","Type":"ContainerStarted","Data":"ba7fa44fe398d73e051b3d97c9c9d2ca28595d34d22e982b1fad5a8abc928307"} Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.758048 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ada9-account-create-update-pprbw" podStartSLOduration=2.7580294629999997 podStartE2EDuration="2.758029463s" podCreationTimestamp="2026-01-31 07:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:34.752379716 +0000 UTC m=+1321.368324380" watchObservedRunningTime="2026-01-31 07:43:34.758029463 +0000 UTC m=+1321.373974117" Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.769507 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" podStartSLOduration=2.769485403 podStartE2EDuration="2.769485403s" podCreationTimestamp="2026-01-31 07:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:34.766685654 +0000 UTC m=+1321.382630308" watchObservedRunningTime="2026-01-31 07:43:34.769485403 +0000 UTC m=+1321.385430057" Jan 31 07:43:34 crc kubenswrapper[4908]: I0131 07:43:34.802279 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kkx5h" podStartSLOduration=2.802259462 podStartE2EDuration="2.802259462s" podCreationTimestamp="2026-01-31 07:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:34.783443593 +0000 UTC m=+1321.399388247" 
watchObservedRunningTime="2026-01-31 07:43:34.802259462 +0000 UTC m=+1321.418204116" Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.741044 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" event={"ID":"0405b85c-acf0-4a3a-9018-c34165dd440a","Type":"ContainerStarted","Data":"9dee05cc9703721636256b8027e2ba92f2bbb0b1fda6ee47ea382fedae0af7d1"} Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744184 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerStarted","Data":"e880abfca674b6ba02cab4aa1a0bc307ac248ca0f40d0b53df4324942e4559f9"} Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744275 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-central-agent" containerID="cri-o://592f3dfe11849df6e806c89bec4c960d74c4c26b0a3bc317a33bd0b60297330c" gracePeriod=30 Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744327 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="sg-core" containerID="cri-o://1bb7941f690a9dd1e83229d75f2ff0f710f5f5e4bc0dab3a5706685c5b066de2" gracePeriod=30 Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744375 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="proxy-httpd" containerID="cri-o://e880abfca674b6ba02cab4aa1a0bc307ac248ca0f40d0b53df4324942e4559f9" gracePeriod=30 Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744404 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.744391 4908 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-notification-agent" containerID="cri-o://8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e" gracePeriod=30 Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.750429 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhkcp" event={"ID":"a874264b-db1f-4a01-9f15-c1e50e22b854","Type":"ContainerStarted","Data":"6b4776641867526effabb111ab2cb2e8ab183558e547005297d967fc452316eb"} Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.762394 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" podStartSLOduration=3.762374569 podStartE2EDuration="3.762374569s" podCreationTimestamp="2026-01-31 07:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:35.76158306 +0000 UTC m=+1322.377527714" watchObservedRunningTime="2026-01-31 07:43:35.762374569 +0000 UTC m=+1322.378319223" Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.788613 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-zhkcp" podStartSLOduration=3.788584988 podStartE2EDuration="3.788584988s" podCreationTimestamp="2026-01-31 07:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:43:35.783836263 +0000 UTC m=+1322.399780917" watchObservedRunningTime="2026-01-31 07:43:35.788584988 +0000 UTC m=+1322.404529642" Jan 31 07:43:35 crc kubenswrapper[4908]: I0131 07:43:35.822344 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.481860779 podStartE2EDuration="9.822319991s" 
podCreationTimestamp="2026-01-31 07:43:26 +0000 UTC" firstStartedPulling="2026-01-31 07:43:27.439044235 +0000 UTC m=+1314.054988889" lastFinishedPulling="2026-01-31 07:43:34.779503447 +0000 UTC m=+1321.395448101" observedRunningTime="2026-01-31 07:43:35.815916615 +0000 UTC m=+1322.431861269" watchObservedRunningTime="2026-01-31 07:43:35.822319991 +0000 UTC m=+1322.438264645" Jan 31 07:43:36 crc kubenswrapper[4908]: E0131 07:43:36.417177 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6d66d1_106e_4c09_a985_71a914e56a0b.slice/crio-conmon-8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99de16f_0b42_4cfb_b041_c0a388bc31e0.slice/crio-conmon-c4111be580f71c81f7e5629e3ab6cec9e978eda21a807fca55e56e0d00a7596f.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.174445 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.190393 4908 generic.go:334] "Generic (PLEG): container finished" podID="a874264b-db1f-4a01-9f15-c1e50e22b854" containerID="6b4776641867526effabb111ab2cb2e8ab183558e547005297d967fc452316eb" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.190630 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhkcp" event={"ID":"a874264b-db1f-4a01-9f15-c1e50e22b854","Type":"ContainerDied","Data":"6b4776641867526effabb111ab2cb2e8ab183558e547005297d967fc452316eb"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.193523 4908 generic.go:334] "Generic (PLEG): container finished" podID="b99de16f-0b42-4cfb-b041-c0a388bc31e0" 
containerID="c4111be580f71c81f7e5629e3ab6cec9e978eda21a807fca55e56e0d00a7596f" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.193633 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kkx5h" event={"ID":"b99de16f-0b42-4cfb-b041-c0a388bc31e0","Type":"ContainerDied","Data":"c4111be580f71c81f7e5629e3ab6cec9e978eda21a807fca55e56e0d00a7596f"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197001 4908 generic.go:334] "Generic (PLEG): container finished" podID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerID="e880abfca674b6ba02cab4aa1a0bc307ac248ca0f40d0b53df4324942e4559f9" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197035 4908 generic.go:334] "Generic (PLEG): container finished" podID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerID="1bb7941f690a9dd1e83229d75f2ff0f710f5f5e4bc0dab3a5706685c5b066de2" exitCode=2 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197052 4908 generic.go:334] "Generic (PLEG): container finished" podID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerID="8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197065 4908 generic.go:334] "Generic (PLEG): container finished" podID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerID="592f3dfe11849df6e806c89bec4c960d74c4c26b0a3bc317a33bd0b60297330c" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197023 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerDied","Data":"e880abfca674b6ba02cab4aa1a0bc307ac248ca0f40d0b53df4324942e4559f9"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197398 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerDied","Data":"1bb7941f690a9dd1e83229d75f2ff0f710f5f5e4bc0dab3a5706685c5b066de2"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197486 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerDied","Data":"8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.197566 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerDied","Data":"592f3dfe11849df6e806c89bec4c960d74c4c26b0a3bc317a33bd0b60297330c"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.199432 4908 generic.go:334] "Generic (PLEG): container finished" podID="4b02f781-7443-49e3-aa1e-a9a1a8a36329" containerID="6082979425b80523e7336fd990b3cbabac41140f2b3c15cc96cc5cc0d13cd27a" exitCode=0 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.199716 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mbdvd" event={"ID":"4b02f781-7443-49e3-aa1e-a9a1a8a36329","Type":"ContainerDied","Data":"6082979425b80523e7336fd990b3cbabac41140f2b3c15cc96cc5cc0d13cd27a"} Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.266408 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"] Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.267262 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="dnsmasq-dns" containerID="cri-o://fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1" gracePeriod=10 Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.755134 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.888797 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.888858 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.888913 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77ltf\" (UniqueName: \"kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.888937 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.889004 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.889023 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.889106 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts\") pod \"be6d66d1-106e-4c09-a985-71a914e56a0b\" (UID: \"be6d66d1-106e-4c09-a985-71a914e56a0b\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.889624 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.897949 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf" (OuterVolumeSpecName: "kube-api-access-77ltf") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "kube-api-access-77ltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.898310 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.899121 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts" (OuterVolumeSpecName: "scripts") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.955209 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.968636 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.989715 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb\") pod \"3e69c80a-41fd-4df0-8758-e105a45521bf\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.989779 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb\") pod \"3e69c80a-41fd-4df0-8758-e105a45521bf\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.989807 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97l2\" (UniqueName: \"kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2\") pod \"3e69c80a-41fd-4df0-8758-e105a45521bf\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.989913 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc\") pod \"3e69c80a-41fd-4df0-8758-e105a45521bf\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.989951 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config\") pod \"3e69c80a-41fd-4df0-8758-e105a45521bf\" (UID: \"3e69c80a-41fd-4df0-8758-e105a45521bf\") " Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.990695 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.990709 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.990718 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6d66d1-106e-4c09-a985-71a914e56a0b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.990728 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77ltf\" (UniqueName: \"kubernetes.io/projected/be6d66d1-106e-4c09-a985-71a914e56a0b-kube-api-access-77ltf\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.990738 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:37 crc kubenswrapper[4908]: I0131 07:43:37.994612 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2" (OuterVolumeSpecName: "kube-api-access-x97l2") pod "3e69c80a-41fd-4df0-8758-e105a45521bf" (UID: "3e69c80a-41fd-4df0-8758-e105a45521bf"). InnerVolumeSpecName "kube-api-access-x97l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.028928 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data" (OuterVolumeSpecName: "config-data") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.064768 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e69c80a-41fd-4df0-8758-e105a45521bf" (UID: "3e69c80a-41fd-4df0-8758-e105a45521bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.065573 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be6d66d1-106e-4c09-a985-71a914e56a0b" (UID: "be6d66d1-106e-4c09-a985-71a914e56a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.073296 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e69c80a-41fd-4df0-8758-e105a45521bf" (UID: "3e69c80a-41fd-4df0-8758-e105a45521bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.092018 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.092050 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.092062 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.092073 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x97l2\" (UniqueName: \"kubernetes.io/projected/3e69c80a-41fd-4df0-8758-e105a45521bf-kube-api-access-x97l2\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.092084 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6d66d1-106e-4c09-a985-71a914e56a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.096328 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config" (OuterVolumeSpecName: "config") pod "3e69c80a-41fd-4df0-8758-e105a45521bf" (UID: "3e69c80a-41fd-4df0-8758-e105a45521bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.107322 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e69c80a-41fd-4df0-8758-e105a45521bf" (UID: "3e69c80a-41fd-4df0-8758-e105a45521bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.159706 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.193938 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle\") pod \"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194003 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom\") pod \"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194048 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts\") pod \"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194070 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id\") pod 
\"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194756 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data\") pod \"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194784 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrmrv\" (UniqueName: \"kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv\") pod \"25723372-e8bd-4939-a894-5d994e36b942\" (UID: \"25723372-e8bd-4939-a894-5d994e36b942\") " Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.195924 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.195943 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e69c80a-41fd-4df0-8758-e105a45521bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.194641 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.198190 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv" (OuterVolumeSpecName: "kube-api-access-zrmrv") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "kube-api-access-zrmrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.199237 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.204845 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts" (OuterVolumeSpecName: "scripts") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.212377 4908 generic.go:334] "Generic (PLEG): container finished" podID="c4ca7fd3-7fa3-4048-8325-f58efea50f94" containerID="ba7fa44fe398d73e051b3d97c9c9d2ca28595d34d22e982b1fad5a8abc928307" exitCode=0 Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.212441 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" event={"ID":"c4ca7fd3-7fa3-4048-8325-f58efea50f94","Type":"ContainerDied","Data":"ba7fa44fe398d73e051b3d97c9c9d2ca28595d34d22e982b1fad5a8abc928307"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.216362 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6d66d1-106e-4c09-a985-71a914e56a0b","Type":"ContainerDied","Data":"95cd51186388efe2e10f32dcc626c4f8344af928cf2696d86b0867ee0425116c"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.216617 4908 scope.go:117] "RemoveContainer" containerID="e880abfca674b6ba02cab4aa1a0bc307ac248ca0f40d0b53df4324942e4559f9" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.216374 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.219893 4908 generic.go:334] "Generic (PLEG): container finished" podID="b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" containerID="50eda1ac36c54e872cf51f718cccffdc29dfb09a5e5827257c12b06db26bafc6" exitCode=0 Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.219961 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ada9-account-create-update-pprbw" event={"ID":"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220","Type":"ContainerDied","Data":"50eda1ac36c54e872cf51f718cccffdc29dfb09a5e5827257c12b06db26bafc6"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.236126 4908 generic.go:334] "Generic (PLEG): container finished" podID="25723372-e8bd-4939-a894-5d994e36b942" containerID="300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6" exitCode=0 Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.236433 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.236468 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerDied","Data":"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.236500 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"25723372-e8bd-4939-a894-5d994e36b942","Type":"ContainerDied","Data":"a2c269e3087d08f0dcb891098bf41d1c2695fde384cba3f01200984914e91ea5"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.256376 4908 scope.go:117] "RemoveContainer" containerID="1bb7941f690a9dd1e83229d75f2ff0f710f5f5e4bc0dab3a5706685c5b066de2" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.258914 4908 generic.go:334] "Generic (PLEG): container finished" podID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerID="fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1" exitCode=0 Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.259018 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.259030 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" event={"ID":"3e69c80a-41fd-4df0-8758-e105a45521bf","Type":"ContainerDied","Data":"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.260116 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2" event={"ID":"3e69c80a-41fd-4df0-8758-e105a45521bf","Type":"ContainerDied","Data":"500b4afc91ab0235cb29cc43ad8f7dbeed13910b3f8ddec769fb9cb4dffb9b10"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.267414 4908 generic.go:334] "Generic (PLEG): container finished" podID="0405b85c-acf0-4a3a-9018-c34165dd440a" containerID="9dee05cc9703721636256b8027e2ba92f2bbb0b1fda6ee47ea382fedae0af7d1" exitCode=0 Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.267625 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" event={"ID":"0405b85c-acf0-4a3a-9018-c34165dd440a","Type":"ContainerDied","Data":"9dee05cc9703721636256b8027e2ba92f2bbb0b1fda6ee47ea382fedae0af7d1"} Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.272553 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.297064 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.297096 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.297108 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25723372-e8bd-4939-a894-5d994e36b942-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.297118 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrmrv\" (UniqueName: \"kubernetes.io/projected/25723372-e8bd-4939-a894-5d994e36b942-kube-api-access-zrmrv\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.297131 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.325438 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.397340 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.398209 4908 scope.go:117] "RemoveContainer" containerID="8644e28c44d5a1a588cea525557c876c2a70aa810fb21fc46b8bc60122f45c9e" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.412915 4908 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.413988 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-central-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414012 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-central-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414033 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-notification-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414043 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-notification-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414073 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="init" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414082 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="init" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414103 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="probe" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414111 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="probe" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414134 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="sg-core" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414142 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" 
containerName="sg-core" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414169 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="dnsmasq-dns" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414178 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="dnsmasq-dns" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414196 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="cinder-scheduler" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414206 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="cinder-scheduler" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.414227 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="proxy-httpd" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414235 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="proxy-httpd" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414689 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-central-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414737 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="sg-core" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414775 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="cinder-scheduler" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414799 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" 
containerName="proxy-httpd" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414826 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="25723372-e8bd-4939-a894-5d994e36b942" containerName="probe" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414853 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" containerName="dnsmasq-dns" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.414881 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" containerName="ceilometer-notification-agent" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.423091 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.432333 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.432703 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.463455 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data" (OuterVolumeSpecName: "config-data") pod "25723372-e8bd-4939-a894-5d994e36b942" (UID: "25723372-e8bd-4939-a894-5d994e36b942"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.483533 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502739 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502799 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502857 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502878 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502904 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data\") pod \"ceilometer-0\" (UID: 
\"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.502931 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twmbk\" (UniqueName: \"kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.503152 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.503250 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25723372-e8bd-4939-a894-5d994e36b942-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.509401 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.520057 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-xlrg2"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.546758 4908 scope.go:117] "RemoveContainer" containerID="592f3dfe11849df6e806c89bec4c960d74c4c26b0a3bc317a33bd0b60297330c" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.580198 4908 scope.go:117] "RemoveContainer" containerID="10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604717 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604769 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604793 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604821 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twmbk\" (UniqueName: \"kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604905 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.604973 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: 
I0131 07:43:38.605023 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.619517 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.620304 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.620365 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.629094 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.629631 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.640503 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.646241 4908 scope.go:117] "RemoveContainer" containerID="300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.649798 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.652714 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twmbk\" (UniqueName: \"kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk\") pod \"ceilometer-0\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.655109 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.672145 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.673769 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.690475 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.713426 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.805453 4908 scope.go:117] "RemoveContainer" containerID="10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.807863 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.807949 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-scripts\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.807976 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.808057 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhtj\" (UniqueName: \"kubernetes.io/projected/650ef73d-d2fe-4042-accc-ae37bbacde25-kube-api-access-djhtj\") 
pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.808092 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.808125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/650ef73d-d2fe-4042-accc-ae37bbacde25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.811252 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1\": container with ID starting with 10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1 not found: ID does not exist" containerID="10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.811305 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1"} err="failed to get container status \"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1\": rpc error: code = NotFound desc = could not find container \"10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1\": container with ID starting with 10a47fe8460aaef23f8506551f7702d79cf1c3b296ea4a366f77f2a83a1cf3c1 not found: ID does not exist" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 
07:43:38.811337 4908 scope.go:117] "RemoveContainer" containerID="300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6" Jan 31 07:43:38 crc kubenswrapper[4908]: E0131 07:43:38.820254 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6\": container with ID starting with 300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6 not found: ID does not exist" containerID="300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.820301 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6"} err="failed to get container status \"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6\": rpc error: code = NotFound desc = could not find container \"300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6\": container with ID starting with 300959358ca079a3040b5ed490d339fa95f8ba7945fd8aec3e83cd73c1a50bd6 not found: ID does not exist" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.820335 4908 scope.go:117] "RemoveContainer" containerID="fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.875365 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.883872 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.911879 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/650ef73d-d2fe-4042-accc-ae37bbacde25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.912002 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.912048 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-scripts\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.912065 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.912122 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhtj\" (UniqueName: \"kubernetes.io/projected/650ef73d-d2fe-4042-accc-ae37bbacde25-kube-api-access-djhtj\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.912149 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.920162 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/650ef73d-d2fe-4042-accc-ae37bbacde25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.921064 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-scripts\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.921581 4908 scope.go:117] "RemoveContainer" containerID="2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.921842 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.923595 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.927803 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ef73d-d2fe-4042-accc-ae37bbacde25-config-data\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.932762 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhtj\" (UniqueName: \"kubernetes.io/projected/650ef73d-d2fe-4042-accc-ae37bbacde25-kube-api-access-djhtj\") pod \"cinder-scheduler-0\" (UID: \"650ef73d-d2fe-4042-accc-ae37bbacde25\") " pod="openstack/cinder-scheduler-0" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.951722 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:38 crc kubenswrapper[4908]: I0131 07:43:38.997122 4908 scope.go:117] "RemoveContainer" containerID="fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1" Jan 31 07:43:39 crc kubenswrapper[4908]: E0131 07:43:39.004238 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1\": container with ID starting with fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1 not found: ID does not exist" containerID="fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.004284 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1"} err="failed to get container status \"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1\": rpc error: code = NotFound desc = could not find container \"fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1\": container with ID starting with 
fea40fcb28b68de975c23fa52c9d11cdc99017ab4ae7be08e522bc84b9f61be1 not found: ID does not exist" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.004316 4908 scope.go:117] "RemoveContainer" containerID="2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a" Jan 31 07:43:39 crc kubenswrapper[4908]: E0131 07:43:39.004911 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a\": container with ID starting with 2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a not found: ID does not exist" containerID="2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.004954 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a"} err="failed to get container status \"2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a\": rpc error: code = NotFound desc = could not find container \"2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a\": container with ID starting with 2b73e3ee9b1bc2f283566c27f4b7a0c3e824b8a3528ba91d3d7310935de2b26a not found: ID does not exist" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.015945 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts\") pod \"a874264b-db1f-4a01-9f15-c1e50e22b854\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.016045 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glvtx\" (UniqueName: \"kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx\") pod 
\"a874264b-db1f-4a01-9f15-c1e50e22b854\" (UID: \"a874264b-db1f-4a01-9f15-c1e50e22b854\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.016877 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a874264b-db1f-4a01-9f15-c1e50e22b854" (UID: "a874264b-db1f-4a01-9f15-c1e50e22b854"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.025844 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx" (OuterVolumeSpecName: "kube-api-access-glvtx") pod "a874264b-db1f-4a01-9f15-c1e50e22b854" (UID: "a874264b-db1f-4a01-9f15-c1e50e22b854"). InnerVolumeSpecName "kube-api-access-glvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.036302 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.046518 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.117336 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts\") pod \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.117383 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g5nz\" (UniqueName: \"kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz\") pod \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\" (UID: \"4b02f781-7443-49e3-aa1e-a9a1a8a36329\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.118112 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a874264b-db1f-4a01-9f15-c1e50e22b854-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.118133 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glvtx\" (UniqueName: \"kubernetes.io/projected/a874264b-db1f-4a01-9f15-c1e50e22b854-kube-api-access-glvtx\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.118548 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b02f781-7443-49e3-aa1e-a9a1a8a36329" (UID: "4b02f781-7443-49e3-aa1e-a9a1a8a36329"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.125165 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz" (OuterVolumeSpecName: "kube-api-access-9g5nz") pod "4b02f781-7443-49e3-aa1e-a9a1a8a36329" (UID: "4b02f781-7443-49e3-aa1e-a9a1a8a36329"). InnerVolumeSpecName "kube-api-access-9g5nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.219738 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts\") pod \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.219854 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5gc6\" (UniqueName: \"kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6\") pod \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\" (UID: \"b99de16f-0b42-4cfb-b041-c0a388bc31e0\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.220211 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b02f781-7443-49e3-aa1e-a9a1a8a36329-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.220221 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g5nz\" (UniqueName: \"kubernetes.io/projected/4b02f781-7443-49e3-aa1e-a9a1a8a36329-kube-api-access-9g5nz\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.222372 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b99de16f-0b42-4cfb-b041-c0a388bc31e0" (UID: "b99de16f-0b42-4cfb-b041-c0a388bc31e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.224009 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6" (OuterVolumeSpecName: "kube-api-access-w5gc6") pod "b99de16f-0b42-4cfb-b041-c0a388bc31e0" (UID: "b99de16f-0b42-4cfb-b041-c0a388bc31e0"). InnerVolumeSpecName "kube-api-access-w5gc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.292745 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kkx5h" event={"ID":"b99de16f-0b42-4cfb-b041-c0a388bc31e0","Type":"ContainerDied","Data":"f049de1b9d88ff6ab492d91b9d2e52898aad64e250ebfb7f3b9d70733c19ae60"} Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.292783 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f049de1b9d88ff6ab492d91b9d2e52898aad64e250ebfb7f3b9d70733c19ae60" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.292834 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kkx5h" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.313628 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f0d30b67-d125-4065-bb37-e91a0ba45b29","Type":"ContainerStarted","Data":"7633880d430b496bfb92e8fd5ddb47ff4a065727e04598d8f6d36d915c786939"} Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.338393 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99de16f-0b42-4cfb-b041-c0a388bc31e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.338424 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5gc6\" (UniqueName: \"kubernetes.io/projected/b99de16f-0b42-4cfb-b041-c0a388bc31e0-kube-api-access-w5gc6\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.338926 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mbdvd" event={"ID":"4b02f781-7443-49e3-aa1e-a9a1a8a36329","Type":"ContainerDied","Data":"4c464968aecce6a659d89d908194282e097ff17794578d6d3b9fb21f7906281f"} Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.338956 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c464968aecce6a659d89d908194282e097ff17794578d6d3b9fb21f7906281f" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.339023 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mbdvd" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.341205 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhkcp" event={"ID":"a874264b-db1f-4a01-9f15-c1e50e22b854","Type":"ContainerDied","Data":"90aeb9820da8d376662551f5fb06da4a976d87045938fe807281b4b077537cda"} Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.341249 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90aeb9820da8d376662551f5fb06da4a976d87045938fe807281b4b077537cda" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.341522 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zhkcp" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.354597 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.790275387 podStartE2EDuration="41.354583462s" podCreationTimestamp="2026-01-31 07:42:58 +0000 UTC" firstStartedPulling="2026-01-31 07:43:01.276819324 +0000 UTC m=+1287.892763978" lastFinishedPulling="2026-01-31 07:43:38.841127399 +0000 UTC m=+1325.457072053" observedRunningTime="2026-01-31 07:43:39.351568229 +0000 UTC m=+1325.967512883" watchObservedRunningTime="2026-01-31 07:43:39.354583462 +0000 UTC m=+1325.970528116" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.503526 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:39 crc kubenswrapper[4908]: W0131 07:43:39.533680 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod986d78d2_4837_4d5d_b414_3b16d6d46314.slice/crio-04813f7f40ffad77b8894f41e87626b6a72f7d662528900650e2b8ce31ea16a5 WatchSource:0}: Error finding container 04813f7f40ffad77b8894f41e87626b6a72f7d662528900650e2b8ce31ea16a5: Status 404 returned error can't find 
the container with id 04813f7f40ffad77b8894f41e87626b6a72f7d662528900650e2b8ce31ea16a5 Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.644442 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.705193 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.802485 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.853611 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.855770 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdx72\" (UniqueName: \"kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72\") pod \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.859421 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts\") pod \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\" (UID: \"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220\") " Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.861533 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" (UID: "b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.874810 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72" (OuterVolumeSpecName: "kube-api-access-kdx72") pod "b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" (UID: "b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220"). InnerVolumeSpecName "kube-api-access-kdx72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.958758 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25723372-e8bd-4939-a894-5d994e36b942" path="/var/lib/kubelet/pods/25723372-e8bd-4939-a894-5d994e36b942/volumes" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.960070 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e69c80a-41fd-4df0-8758-e105a45521bf" path="/var/lib/kubelet/pods/3e69c80a-41fd-4df0-8758-e105a45521bf/volumes" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.960758 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6d66d1-106e-4c09-a985-71a914e56a0b" path="/var/lib/kubelet/pods/be6d66d1-106e-4c09-a985-71a914e56a0b/volumes" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.974924 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:39 crc kubenswrapper[4908]: I0131 07:43:39.975117 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdx72\" (UniqueName: \"kubernetes.io/projected/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220-kube-api-access-kdx72\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.008415 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.076844 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts\") pod \"0405b85c-acf0-4a3a-9018-c34165dd440a\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.077044 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsksx\" (UniqueName: \"kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx\") pod \"0405b85c-acf0-4a3a-9018-c34165dd440a\" (UID: \"0405b85c-acf0-4a3a-9018-c34165dd440a\") " Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.078722 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0405b85c-acf0-4a3a-9018-c34165dd440a" (UID: "0405b85c-acf0-4a3a-9018-c34165dd440a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.087158 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx" (OuterVolumeSpecName: "kube-api-access-wsksx") pod "0405b85c-acf0-4a3a-9018-c34165dd440a" (UID: "0405b85c-acf0-4a3a-9018-c34165dd440a"). InnerVolumeSpecName "kube-api-access-wsksx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.178491 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts\") pod \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.178551 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb882\" (UniqueName: \"kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882\") pod \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\" (UID: \"c4ca7fd3-7fa3-4048-8325-f58efea50f94\") " Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.179124 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsksx\" (UniqueName: \"kubernetes.io/projected/0405b85c-acf0-4a3a-9018-c34165dd440a-kube-api-access-wsksx\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.179141 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0405b85c-acf0-4a3a-9018-c34165dd440a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.179943 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4ca7fd3-7fa3-4048-8325-f58efea50f94" (UID: "c4ca7fd3-7fa3-4048-8325-f58efea50f94"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.181725 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882" (OuterVolumeSpecName: "kube-api-access-bb882") pod "c4ca7fd3-7fa3-4048-8325-f58efea50f94" (UID: "c4ca7fd3-7fa3-4048-8325-f58efea50f94"). InnerVolumeSpecName "kube-api-access-bb882". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.288169 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ca7fd3-7fa3-4048-8325-f58efea50f94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.288211 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb882\" (UniqueName: \"kubernetes.io/projected/c4ca7fd3-7fa3-4048-8325-f58efea50f94-kube-api-access-bb882\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.375820 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" event={"ID":"0405b85c-acf0-4a3a-9018-c34165dd440a","Type":"ContainerDied","Data":"30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5"} Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.376133 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d3a385551c1cbd5fae73dad70b719290ed96925d36e96f694a8a44a99955c5" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.375908 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b96a-account-create-update-tppx9" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.398081 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" event={"ID":"c4ca7fd3-7fa3-4048-8325-f58efea50f94","Type":"ContainerDied","Data":"a1686ae54de6ff85beb5152904e609ff3b9ef8725aa090a2130d7126890f8280"} Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.398127 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1686ae54de6ff85beb5152904e609ff3b9ef8725aa090a2130d7126890f8280" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.398101 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b44a-account-create-update-bd28z" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.413885 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"650ef73d-d2fe-4042-accc-ae37bbacde25","Type":"ContainerStarted","Data":"71d17fdab7dd2cae25531c2f4c641a8fa3de2c96ed1becce3150a8101423346f"} Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.420611 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ada9-account-create-update-pprbw" event={"ID":"b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220","Type":"ContainerDied","Data":"069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73"} Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.420653 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069198b87c716209496a304abd378f9f65415442b6bf60f33a5bd457f777df73" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.420749 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ada9-account-create-update-pprbw" Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.424035 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerStarted","Data":"d45652c1f9c173b3ea2212c62a04285b501842ced4f48fa7549b0e224790f148"} Jan 31 07:43:40 crc kubenswrapper[4908]: I0131 07:43:40.424079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerStarted","Data":"04813f7f40ffad77b8894f41e87626b6a72f7d662528900650e2b8ce31ea16a5"} Jan 31 07:43:41 crc kubenswrapper[4908]: I0131 07:43:41.433486 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"650ef73d-d2fe-4042-accc-ae37bbacde25","Type":"ContainerStarted","Data":"6d8d51260d00d71ddf16ec4b17bdbf8f9093a24074efd9c1182bb6e13f15c15b"} Jan 31 07:43:41 crc kubenswrapper[4908]: I0131 07:43:41.433933 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"650ef73d-d2fe-4042-accc-ae37bbacde25","Type":"ContainerStarted","Data":"53c4dd222b63f3dda1768eb35a23bb50de7220275412b2bee3340d06918c1e72"} Jan 31 07:43:41 crc kubenswrapper[4908]: I0131 07:43:41.438060 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerStarted","Data":"5aebb49ddd4e388b398722070c215652ddaa863c98bca3623d0d3e688d7c6e8f"} Jan 31 07:43:41 crc kubenswrapper[4908]: I0131 07:43:41.455578 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.455555995 podStartE2EDuration="3.455555995s" podCreationTimestamp="2026-01-31 07:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 07:43:41.45206301 +0000 UTC m=+1328.068007664" watchObservedRunningTime="2026-01-31 07:43:41.455555995 +0000 UTC m=+1328.071500649" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.450745 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerStarted","Data":"fe1d0d26c23afa2cc706d1a969b6ff6cff6e154cd9e5375a3b39ed34ae9c6729"} Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.576083 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7ksrc"] Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577041 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577063 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577079 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a874264b-db1f-4a01-9f15-c1e50e22b854" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577087 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a874264b-db1f-4a01-9f15-c1e50e22b854" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577103 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0405b85c-acf0-4a3a-9018-c34165dd440a" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577113 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0405b85c-acf0-4a3a-9018-c34165dd440a" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577128 4908 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b99de16f-0b42-4cfb-b041-c0a388bc31e0" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577136 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99de16f-0b42-4cfb-b041-c0a388bc31e0" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577146 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b02f781-7443-49e3-aa1e-a9a1a8a36329" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577153 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b02f781-7443-49e3-aa1e-a9a1a8a36329" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: E0131 07:43:42.577171 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ca7fd3-7fa3-4048-8325-f58efea50f94" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.577179 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ca7fd3-7fa3-4048-8325-f58efea50f94" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.578936 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0405b85c-acf0-4a3a-9018-c34165dd440a" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579024 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579036 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ca7fd3-7fa3-4048-8325-f58efea50f94" containerName="mariadb-account-create-update" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579052 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99de16f-0b42-4cfb-b041-c0a388bc31e0" 
containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579067 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b02f781-7443-49e3-aa1e-a9a1a8a36329" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579077 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a874264b-db1f-4a01-9f15-c1e50e22b854" containerName="mariadb-database-create" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.579914 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.588957 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6c4f4" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.602000 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7ksrc"] Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.603013 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.603286 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.735501 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.735724 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.735881 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xl2t\" (UniqueName: \"kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.735945 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.837272 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.837316 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xl2t\" (UniqueName: \"kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.837378 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.837430 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.848434 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.850067 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.851429 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.859288 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xl2t\" (UniqueName: 
\"kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t\") pod \"nova-cell0-conductor-db-sync-7ksrc\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") " pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:42 crc kubenswrapper[4908]: I0131 07:43:42.912630 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:43:43 crc kubenswrapper[4908]: I0131 07:43:43.449323 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7ksrc"] Jan 31 07:43:43 crc kubenswrapper[4908]: W0131 07:43:43.454479 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668eb706_10d5_4310_9c66_d8d77bebe230.slice/crio-27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03 WatchSource:0}: Error finding container 27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03: Status 404 returned error can't find the container with id 27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03 Jan 31 07:43:43 crc kubenswrapper[4908]: I0131 07:43:43.457680 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 07:43:44.048053 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 07:43:44.377414 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 07:43:44.470408 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" event={"ID":"668eb706-10d5-4310-9c66-d8d77bebe230","Type":"ContainerStarted","Data":"27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03"} Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 
07:43:44.474433 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerStarted","Data":"b97279036c1565ebc36b701feff8b6dba03f7e32525d8c1b094531f562c8d732"} Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 07:43:44.474605 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:43:44 crc kubenswrapper[4908]: I0131 07:43:44.497722 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.211826771 podStartE2EDuration="6.497697922s" podCreationTimestamp="2026-01-31 07:43:38 +0000 UTC" firstStartedPulling="2026-01-31 07:43:39.537912614 +0000 UTC m=+1326.153857258" lastFinishedPulling="2026-01-31 07:43:43.823783755 +0000 UTC m=+1330.439728409" observedRunningTime="2026-01-31 07:43:44.491210864 +0000 UTC m=+1331.107155518" watchObservedRunningTime="2026-01-31 07:43:44.497697922 +0000 UTC m=+1331.113642566" Jan 31 07:43:45 crc kubenswrapper[4908]: I0131 07:43:45.486028 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-central-agent" containerID="cri-o://d45652c1f9c173b3ea2212c62a04285b501842ced4f48fa7549b0e224790f148" gracePeriod=30 Jan 31 07:43:45 crc kubenswrapper[4908]: I0131 07:43:45.486480 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="proxy-httpd" containerID="cri-o://b97279036c1565ebc36b701feff8b6dba03f7e32525d8c1b094531f562c8d732" gracePeriod=30 Jan 31 07:43:45 crc kubenswrapper[4908]: I0131 07:43:45.486536 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-notification-agent" 
containerID="cri-o://5aebb49ddd4e388b398722070c215652ddaa863c98bca3623d0d3e688d7c6e8f" gracePeriod=30 Jan 31 07:43:45 crc kubenswrapper[4908]: I0131 07:43:45.486685 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="sg-core" containerID="cri-o://fe1d0d26c23afa2cc706d1a969b6ff6cff6e154cd9e5375a3b39ed34ae9c6729" gracePeriod=30 Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502452 4908 generic.go:334] "Generic (PLEG): container finished" podID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerID="b97279036c1565ebc36b701feff8b6dba03f7e32525d8c1b094531f562c8d732" exitCode=0 Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502492 4908 generic.go:334] "Generic (PLEG): container finished" podID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerID="fe1d0d26c23afa2cc706d1a969b6ff6cff6e154cd9e5375a3b39ed34ae9c6729" exitCode=2 Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502501 4908 generic.go:334] "Generic (PLEG): container finished" podID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerID="5aebb49ddd4e388b398722070c215652ddaa863c98bca3623d0d3e688d7c6e8f" exitCode=0 Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502512 4908 generic.go:334] "Generic (PLEG): container finished" podID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerID="d45652c1f9c173b3ea2212c62a04285b501842ced4f48fa7549b0e224790f148" exitCode=0 Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502526 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerDied","Data":"b97279036c1565ebc36b701feff8b6dba03f7e32525d8c1b094531f562c8d732"} Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502568 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerDied","Data":"fe1d0d26c23afa2cc706d1a969b6ff6cff6e154cd9e5375a3b39ed34ae9c6729"} Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502579 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerDied","Data":"5aebb49ddd4e388b398722070c215652ddaa863c98bca3623d0d3e688d7c6e8f"} Jan 31 07:43:46 crc kubenswrapper[4908]: I0131 07:43:46.502589 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerDied","Data":"d45652c1f9c173b3ea2212c62a04285b501842ced4f48fa7549b0e224790f148"} Jan 31 07:43:49 crc kubenswrapper[4908]: I0131 07:43:49.236058 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.664452 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830191 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830293 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twmbk\" (UniqueName: \"kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830330 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830369 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830399 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830418 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.830460 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd\") pod \"986d78d2-4837-4d5d-b414-3b16d6d46314\" (UID: \"986d78d2-4837-4d5d-b414-3b16d6d46314\") " Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.831233 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.831512 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.835938 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk" (OuterVolumeSpecName: "kube-api-access-twmbk") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "kube-api-access-twmbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.836112 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts" (OuterVolumeSpecName: "scripts") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.854645 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.895207 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.912889 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data" (OuterVolumeSpecName: "config-data") pod "986d78d2-4837-4d5d-b414-3b16d6d46314" (UID: "986d78d2-4837-4d5d-b414-3b16d6d46314"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932178 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932217 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twmbk\" (UniqueName: \"kubernetes.io/projected/986d78d2-4837-4d5d-b414-3b16d6d46314-kube-api-access-twmbk\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932234 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932246 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932258 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932268 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986d78d2-4837-4d5d-b414-3b16d6d46314-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:53 crc kubenswrapper[4908]: I0131 07:43:53.932377 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986d78d2-4837-4d5d-b414-3b16d6d46314-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.579802 4908 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986d78d2-4837-4d5d-b414-3b16d6d46314","Type":"ContainerDied","Data":"04813f7f40ffad77b8894f41e87626b6a72f7d662528900650e2b8ce31ea16a5"} Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.579867 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.580133 4908 scope.go:117] "RemoveContainer" containerID="b97279036c1565ebc36b701feff8b6dba03f7e32525d8c1b094531f562c8d732" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.657319 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.675686 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.690925 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:54 crc kubenswrapper[4908]: E0131 07:43:54.691351 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="proxy-httpd" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691375 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="proxy-httpd" Jan 31 07:43:54 crc kubenswrapper[4908]: E0131 07:43:54.691395 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="sg-core" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691403 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="sg-core" Jan 31 07:43:54 crc kubenswrapper[4908]: E0131 07:43:54.691413 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-central-agent" Jan 31 07:43:54 crc 
kubenswrapper[4908]: I0131 07:43:54.691421 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-central-agent" Jan 31 07:43:54 crc kubenswrapper[4908]: E0131 07:43:54.691439 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-notification-agent" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691447 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-notification-agent" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691653 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="proxy-httpd" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691681 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-central-agent" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691696 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="ceilometer-notification-agent" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.691705 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" containerName="sg-core" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.694851 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.699342 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.703857 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.706234 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852247 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t52ch\" (UniqueName: \"kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852307 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852335 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852388 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852420 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852437 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.852454 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954382 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954437 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954519 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954572 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954621 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.954689 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t52ch\" (UniqueName: \"kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.955590 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 
31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.956455 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.959028 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.959739 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.960791 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.961062 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data\") pod \"ceilometer-0\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:54 crc kubenswrapper[4908]: I0131 07:43:54.970160 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t52ch\" (UniqueName: \"kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch\") pod \"ceilometer-0\" 
(UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") " pod="openstack/ceilometer-0" Jan 31 07:43:55 crc kubenswrapper[4908]: I0131 07:43:55.011096 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:43:55 crc kubenswrapper[4908]: I0131 07:43:55.857245 4908 scope.go:117] "RemoveContainer" containerID="fe1d0d26c23afa2cc706d1a969b6ff6cff6e154cd9e5375a3b39ed34ae9c6729" Jan 31 07:43:55 crc kubenswrapper[4908]: I0131 07:43:55.951154 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986d78d2-4837-4d5d-b414-3b16d6d46314" path="/var/lib/kubelet/pods/986d78d2-4837-4d5d-b414-3b16d6d46314/volumes" Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.034713 4908 scope.go:117] "RemoveContainer" containerID="5aebb49ddd4e388b398722070c215652ddaa863c98bca3623d0d3e688d7c6e8f" Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.090440 4908 scope.go:117] "RemoveContainer" containerID="d45652c1f9c173b3ea2212c62a04285b501842ced4f48fa7549b0e224790f148" Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.485915 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.599891 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerStarted","Data":"dd0bdb815dd9567f3fdde54fa8d0f1b839e34cca105c886ef22f265d2d6d83f6"} Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.601636 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" event={"ID":"668eb706-10d5-4310-9c66-d8d77bebe230","Type":"ContainerStarted","Data":"292464a81710a3ad057c52f4a6867bd255c4ad37143b5c763f45c9dd8012504b"} Jan 31 07:43:56 crc kubenswrapper[4908]: I0131 07:43:56.618873 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" 
podStartSLOduration=1.9793831499999999 podStartE2EDuration="14.618855154s" podCreationTimestamp="2026-01-31 07:43:42 +0000 UTC" firstStartedPulling="2026-01-31 07:43:43.4574431 +0000 UTC m=+1330.073387754" lastFinishedPulling="2026-01-31 07:43:56.096915104 +0000 UTC m=+1342.712859758" observedRunningTime="2026-01-31 07:43:56.615427291 +0000 UTC m=+1343.231371945" watchObservedRunningTime="2026-01-31 07:43:56.618855154 +0000 UTC m=+1343.234799808" Jan 31 07:43:57 crc kubenswrapper[4908]: I0131 07:43:57.133723 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bbb449548-wkq7p" Jan 31 07:43:58 crc kubenswrapper[4908]: I0131 07:43:58.619595 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerStarted","Data":"a31dde0ea0708ba86383de76c78ad89e82c8893149e715908920a7fd926983be"} Jan 31 07:43:59 crc kubenswrapper[4908]: I0131 07:43:59.275295 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6577f594f-lz5n8" Jan 31 07:43:59 crc kubenswrapper[4908]: I0131 07:43:59.348170 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"] Jan 31 07:43:59 crc kubenswrapper[4908]: I0131 07:43:59.348395 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bbb449548-wkq7p" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-api" containerID="cri-o://3cece66178658d357fb41431396362aa2621985337051b7015cb1d6c172aeea6" gracePeriod=30 Jan 31 07:43:59 crc kubenswrapper[4908]: I0131 07:43:59.348838 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bbb449548-wkq7p" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-httpd" containerID="cri-o://5f18187ba37f22c2dbe97b309a1bbc29e5a900f2359dec9d7499b5b17bbdcb95" gracePeriod=30 Jan 31 07:44:00 crc 
kubenswrapper[4908]: I0131 07:44:00.639281 4908 generic.go:334] "Generic (PLEG): container finished" podID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerID="5f18187ba37f22c2dbe97b309a1bbc29e5a900f2359dec9d7499b5b17bbdcb95" exitCode=0
Jan 31 07:44:00 crc kubenswrapper[4908]: I0131 07:44:00.639513 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerDied","Data":"5f18187ba37f22c2dbe97b309a1bbc29e5a900f2359dec9d7499b5b17bbdcb95"}
Jan 31 07:44:02 crc kubenswrapper[4908]: I0131 07:44:02.659320 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerStarted","Data":"a54fd38984b39ebb594f899649b1044f23f5bb09028ce8e8198dee491fb48bca"}
Jan 31 07:44:04 crc kubenswrapper[4908]: I0131 07:44:04.679037 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerStarted","Data":"8753a3ca6fa861c80484f443ca8e134f57a1a36cb09cab870e14c1f9b1ddf57f"}
Jan 31 07:44:04 crc kubenswrapper[4908]: I0131 07:44:04.681471 4908 generic.go:334] "Generic (PLEG): container finished" podID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerID="3cece66178658d357fb41431396362aa2621985337051b7015cb1d6c172aeea6" exitCode=0
Jan 31 07:44:04 crc kubenswrapper[4908]: I0131 07:44:04.681521 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerDied","Data":"3cece66178658d357fb41431396362aa2621985337051b7015cb1d6c172aeea6"}
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.145804 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.215578 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bbb449548-wkq7p"
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.339277 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kh7h\" (UniqueName: \"kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h\") pod \"a8dff220-3208-4f05-aefb-e094594d7ab7\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") "
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.339363 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle\") pod \"a8dff220-3208-4f05-aefb-e094594d7ab7\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") "
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.339480 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config\") pod \"a8dff220-3208-4f05-aefb-e094594d7ab7\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") "
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.339503 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config\") pod \"a8dff220-3208-4f05-aefb-e094594d7ab7\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") "
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.339531 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs\") pod \"a8dff220-3208-4f05-aefb-e094594d7ab7\" (UID: \"a8dff220-3208-4f05-aefb-e094594d7ab7\") "
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.344864 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a8dff220-3208-4f05-aefb-e094594d7ab7" (UID: "a8dff220-3208-4f05-aefb-e094594d7ab7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.352753 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h" (OuterVolumeSpecName: "kube-api-access-5kh7h") pod "a8dff220-3208-4f05-aefb-e094594d7ab7" (UID: "a8dff220-3208-4f05-aefb-e094594d7ab7"). InnerVolumeSpecName "kube-api-access-5kh7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.384937 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config" (OuterVolumeSpecName: "config") pod "a8dff220-3208-4f05-aefb-e094594d7ab7" (UID: "a8dff220-3208-4f05-aefb-e094594d7ab7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.389049 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8dff220-3208-4f05-aefb-e094594d7ab7" (UID: "a8dff220-3208-4f05-aefb-e094594d7ab7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.415872 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a8dff220-3208-4f05-aefb-e094594d7ab7" (UID: "a8dff220-3208-4f05-aefb-e094594d7ab7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.440990 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.441015 4908 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.441026 4908 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.441037 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kh7h\" (UniqueName: \"kubernetes.io/projected/a8dff220-3208-4f05-aefb-e094594d7ab7-kube-api-access-5kh7h\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.441045 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8dff220-3208-4f05-aefb-e094594d7ab7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.692622 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bbb449548-wkq7p" event={"ID":"a8dff220-3208-4f05-aefb-e094594d7ab7","Type":"ContainerDied","Data":"4cf0e316ceca10efdd77091d6cc235a2d130357cd13cbf94a21c95af6fc59c94"}
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.692735 4908 scope.go:117] "RemoveContainer" containerID="5f18187ba37f22c2dbe97b309a1bbc29e5a900f2359dec9d7499b5b17bbdcb95"
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.692898 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bbb449548-wkq7p"
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.720345 4908 scope.go:117] "RemoveContainer" containerID="3cece66178658d357fb41431396362aa2621985337051b7015cb1d6c172aeea6"
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.743713 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"]
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.748963 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bbb449548-wkq7p"]
Jan 31 07:44:05 crc kubenswrapper[4908]: I0131 07:44:05.953067 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" path="/var/lib/kubelet/pods/a8dff220-3208-4f05-aefb-e094594d7ab7/volumes"
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715049 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerStarted","Data":"3bba13309f1286fd85ce85ee00269aa8dabf672bed1f09d75a404348594d59af"}
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715548 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715336 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="sg-core" containerID="cri-o://8753a3ca6fa861c80484f443ca8e134f57a1a36cb09cab870e14c1f9b1ddf57f" gracePeriod=30
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715240 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-central-agent" containerID="cri-o://a31dde0ea0708ba86383de76c78ad89e82c8893149e715908920a7fd926983be" gracePeriod=30
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715406 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-notification-agent" containerID="cri-o://a54fd38984b39ebb594f899649b1044f23f5bb09028ce8e8198dee491fb48bca" gracePeriod=30
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.715339 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="proxy-httpd" containerID="cri-o://3bba13309f1286fd85ce85ee00269aa8dabf672bed1f09d75a404348594d59af" gracePeriod=30
Jan 31 07:44:07 crc kubenswrapper[4908]: I0131 07:44:07.749332 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.701171912 podStartE2EDuration="13.749311164s" podCreationTimestamp="2026-01-31 07:43:54 +0000 UTC" firstStartedPulling="2026-01-31 07:43:56.497872194 +0000 UTC m=+1343.113816848" lastFinishedPulling="2026-01-31 07:44:06.546011446 +0000 UTC m=+1353.161956100" observedRunningTime="2026-01-31 07:44:07.742670232 +0000 UTC m=+1354.358614886" watchObservedRunningTime="2026-01-31 07:44:07.749311164 +0000 UTC m=+1354.365255818"
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724487 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerID="3bba13309f1286fd85ce85ee00269aa8dabf672bed1f09d75a404348594d59af" exitCode=0
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724718 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerID="8753a3ca6fa861c80484f443ca8e134f57a1a36cb09cab870e14c1f9b1ddf57f" exitCode=2
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724727 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerID="a54fd38984b39ebb594f899649b1044f23f5bb09028ce8e8198dee491fb48bca" exitCode=0
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724734 4908 generic.go:334] "Generic (PLEG): container finished" podID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerID="a31dde0ea0708ba86383de76c78ad89e82c8893149e715908920a7fd926983be" exitCode=0
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724623 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerDied","Data":"3bba13309f1286fd85ce85ee00269aa8dabf672bed1f09d75a404348594d59af"}
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724762 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerDied","Data":"8753a3ca6fa861c80484f443ca8e134f57a1a36cb09cab870e14c1f9b1ddf57f"}
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724773 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerDied","Data":"a54fd38984b39ebb594f899649b1044f23f5bb09028ce8e8198dee491fb48bca"}
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.724781 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerDied","Data":"a31dde0ea0708ba86383de76c78ad89e82c8893149e715908920a7fd926983be"}
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.819050 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899372 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899476 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899496 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899544 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t52ch\" (UniqueName: \"kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899561 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899577 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.899593 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd\") pod \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\" (UID: \"7c46e9f0-5277-43ab-bdf6-645a028db8fc\") "
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.900469 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.900746 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.905051 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts" (OuterVolumeSpecName: "scripts") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.908211 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch" (OuterVolumeSpecName: "kube-api-access-t52ch") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "kube-api-access-t52ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.923423 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.969822 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:08 crc kubenswrapper[4908]: I0131 07:44:08.987812 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data" (OuterVolumeSpecName: "config-data") pod "7c46e9f0-5277-43ab-bdf6-645a028db8fc" (UID: "7c46e9f0-5277-43ab-bdf6-645a028db8fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002024 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002304 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002315 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002323 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t52ch\" (UniqueName: \"kubernetes.io/projected/7c46e9f0-5277-43ab-bdf6-645a028db8fc-kube-api-access-t52ch\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002333 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c46e9f0-5277-43ab-bdf6-645a028db8fc-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002341 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.002349 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c46e9f0-5277-43ab-bdf6-645a028db8fc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.739717 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c46e9f0-5277-43ab-bdf6-645a028db8fc","Type":"ContainerDied","Data":"dd0bdb815dd9567f3fdde54fa8d0f1b839e34cca105c886ef22f265d2d6d83f6"}
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.739772 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.739784 4908 scope.go:117] "RemoveContainer" containerID="3bba13309f1286fd85ce85ee00269aa8dabf672bed1f09d75a404348594d59af"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.766482 4908 scope.go:117] "RemoveContainer" containerID="8753a3ca6fa861c80484f443ca8e134f57a1a36cb09cab870e14c1f9b1ddf57f"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.795226 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.798624 4908 scope.go:117] "RemoveContainer" containerID="a54fd38984b39ebb594f899649b1044f23f5bb09028ce8e8198dee491fb48bca"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.805699 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817389 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817820 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-central-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817837 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-central-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817857 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="proxy-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817865 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="proxy-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817882 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-api"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817889 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-api"
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817900 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-notification-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817907 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-notification-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817928 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="sg-core"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.817936 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="sg-core"
Jan 31 07:44:09 crc kubenswrapper[4908]: E0131 07:44:09.817998 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.818008 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826231 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="proxy-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826275 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-httpd"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826298 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8dff220-3208-4f05-aefb-e094594d7ab7" containerName="neutron-api"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826313 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-notification-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826327 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="ceilometer-central-agent"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.826348 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" containerName="sg-core"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.829825 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.832088 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.832377 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.835026 4908 scope.go:117] "RemoveContainer" containerID="a31dde0ea0708ba86383de76c78ad89e82c8893149e715908920a7fd926983be"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.850492 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.919428 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.919520 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.919582 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.919720 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2br4\" (UniqueName: \"kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.919953 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.920035 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.920069 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:09 crc kubenswrapper[4908]: I0131 07:44:09.955743 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c46e9f0-5277-43ab-bdf6-645a028db8fc" path="/var/lib/kubelet/pods/7c46e9f0-5277-43ab-bdf6-645a028db8fc/volumes"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.021541 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.021625 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.021654 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.022419 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.022532 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.022618 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.022701 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2br4\" (UniqueName: \"kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.022969 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.023004 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.036031 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.036194 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.037103 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.039266 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.045943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2br4\" (UniqueName: \"kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4\") pod \"ceilometer-0\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.158365 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.600995 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:10 crc kubenswrapper[4908]: W0131 07:44:10.607282 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod025fff27_1ca8_4619_b60d_ee68f0a7fff8.slice/crio-ab856df615f74a5d69d2def74ae0f7329b1e229e5f4f734490a666a0c039aa3a WatchSource:0}: Error finding container ab856df615f74a5d69d2def74ae0f7329b1e229e5f4f734490a666a0c039aa3a: Status 404 returned error can't find the container with id ab856df615f74a5d69d2def74ae0f7329b1e229e5f4f734490a666a0c039aa3a
Jan 31 07:44:10 crc kubenswrapper[4908]: I0131 07:44:10.749906 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerStarted","Data":"ab856df615f74a5d69d2def74ae0f7329b1e229e5f4f734490a666a0c039aa3a"}
Jan 31 07:44:11 crc kubenswrapper[4908]: I0131 07:44:11.759500 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerStarted","Data":"f69c16f2521faf6aa05d6d07e6ae6ae62a43a2a0489e91485c8a90167a709741"}
Jan 31 07:44:12 crc kubenswrapper[4908]: I0131 07:44:12.768487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerStarted","Data":"ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff"}
Jan 31 07:44:13 crc kubenswrapper[4908]: I0131 07:44:13.777177 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerStarted","Data":"d02b051f220915ce0dcf0ac5914b13a410bf24a92985b40c6f63d5e3fdbd89bf"}
Jan 31 07:44:15 crc kubenswrapper[4908]: I0131 07:44:15.804114 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerStarted","Data":"db2b1974dd95dc85249afd7582688b0a84393aa6791ea87b6435ea0030341ac4"}
Jan 31 07:44:15 crc kubenswrapper[4908]: I0131 07:44:15.805541 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 07:44:15 crc kubenswrapper[4908]: I0131 07:44:15.835192 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.468555784 podStartE2EDuration="6.83517646s" podCreationTimestamp="2026-01-31 07:44:09 +0000 UTC" firstStartedPulling="2026-01-31 07:44:10.608699226 +0000 UTC m=+1357.224643880" lastFinishedPulling="2026-01-31 07:44:14.975319902 +0000 UTC m=+1361.591264556" observedRunningTime="2026-01-31 07:44:15.829805817 +0000 UTC m=+1362.445750471" watchObservedRunningTime="2026-01-31 07:44:15.83517646 +0000 UTC m=+1362.451121114"
Jan 31 07:44:22 crc kubenswrapper[4908]: I0131 07:44:22.863124 4908 generic.go:334] "Generic (PLEG): container finished" podID="668eb706-10d5-4310-9c66-d8d77bebe230" containerID="292464a81710a3ad057c52f4a6867bd255c4ad37143b5c763f45c9dd8012504b" exitCode=0
Jan 31 07:44:22 crc kubenswrapper[4908]: I0131 07:44:22.863239 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" event={"ID":"668eb706-10d5-4310-9c66-d8d77bebe230","Type":"ContainerDied","Data":"292464a81710a3ad057c52f4a6867bd255c4ad37143b5c763f45c9dd8012504b"}
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.180478 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7ksrc"
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.269827 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle\") pod \"668eb706-10d5-4310-9c66-d8d77bebe230\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") "
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.269948 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts\") pod \"668eb706-10d5-4310-9c66-d8d77bebe230\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") "
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.269972 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xl2t\" (UniqueName: \"kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t\") pod \"668eb706-10d5-4310-9c66-d8d77bebe230\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") "
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.270054 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data\") pod \"668eb706-10d5-4310-9c66-d8d77bebe230\" (UID: \"668eb706-10d5-4310-9c66-d8d77bebe230\") "
Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.275106 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts" (OuterVolumeSpecName: "scripts") pod "668eb706-10d5-4310-9c66-d8d77bebe230" (UID: "668eb706-10d5-4310-9c66-d8d77bebe230"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.275109 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t" (OuterVolumeSpecName: "kube-api-access-5xl2t") pod "668eb706-10d5-4310-9c66-d8d77bebe230" (UID: "668eb706-10d5-4310-9c66-d8d77bebe230"). InnerVolumeSpecName "kube-api-access-5xl2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.295263 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "668eb706-10d5-4310-9c66-d8d77bebe230" (UID: "668eb706-10d5-4310-9c66-d8d77bebe230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.302223 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data" (OuterVolumeSpecName: "config-data") pod "668eb706-10d5-4310-9c66-d8d77bebe230" (UID: "668eb706-10d5-4310-9c66-d8d77bebe230"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.373369 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.373406 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xl2t\" (UniqueName: \"kubernetes.io/projected/668eb706-10d5-4310-9c66-d8d77bebe230-kube-api-access-5xl2t\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.373417 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.373428 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668eb706-10d5-4310-9c66-d8d77bebe230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.879070 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" event={"ID":"668eb706-10d5-4310-9c66-d8d77bebe230","Type":"ContainerDied","Data":"27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03"} Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.879409 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f4536d37640b1afd6cc8abe5ea444a718fbe0e28a1e18ac8b61940705a2a03" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.879138 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7ksrc" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.977488 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:44:24 crc kubenswrapper[4908]: E0131 07:44:24.977860 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668eb706-10d5-4310-9c66-d8d77bebe230" containerName="nova-cell0-conductor-db-sync" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.977875 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="668eb706-10d5-4310-9c66-d8d77bebe230" containerName="nova-cell0-conductor-db-sync" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.978121 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="668eb706-10d5-4310-9c66-d8d77bebe230" containerName="nova-cell0-conductor-db-sync" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.978882 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.981325 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6c4f4" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.981677 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 07:44:24 crc kubenswrapper[4908]: I0131 07:44:24.996282 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.083600 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 
07:44:25.083712 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpzz\" (UniqueName: \"kubernetes.io/projected/7f7e63c4-f87f-4cab-be77-7da55fcddd87-kube-api-access-8kpzz\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.083742 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.185040 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.185142 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kpzz\" (UniqueName: \"kubernetes.io/projected/7f7e63c4-f87f-4cab-be77-7da55fcddd87-kube-api-access-8kpzz\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.185171 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.188487 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.188627 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f7e63c4-f87f-4cab-be77-7da55fcddd87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.203742 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kpzz\" (UniqueName: \"kubernetes.io/projected/7f7e63c4-f87f-4cab-be77-7da55fcddd87-kube-api-access-8kpzz\") pod \"nova-cell0-conductor-0\" (UID: \"7f7e63c4-f87f-4cab-be77-7da55fcddd87\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.297348 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.737386 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:44:25 crc kubenswrapper[4908]: W0131 07:44:25.738312 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f7e63c4_f87f_4cab_be77_7da55fcddd87.slice/crio-3de2e200bd18e29d55f3c5494727f3f6e93b8dd656cb6681f4275e194dc15114 WatchSource:0}: Error finding container 3de2e200bd18e29d55f3c5494727f3f6e93b8dd656cb6681f4275e194dc15114: Status 404 returned error can't find the container with id 3de2e200bd18e29d55f3c5494727f3f6e93b8dd656cb6681f4275e194dc15114 Jan 31 07:44:25 crc kubenswrapper[4908]: I0131 07:44:25.889027 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7f7e63c4-f87f-4cab-be77-7da55fcddd87","Type":"ContainerStarted","Data":"3de2e200bd18e29d55f3c5494727f3f6e93b8dd656cb6681f4275e194dc15114"} Jan 31 07:44:26 crc kubenswrapper[4908]: I0131 07:44:26.896518 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7f7e63c4-f87f-4cab-be77-7da55fcddd87","Type":"ContainerStarted","Data":"fbb60c2fa57b1d9a4c858bc7e4e2811af2826375fb284b45251be2256cd9d703"} Jan 31 07:44:26 crc kubenswrapper[4908]: I0131 07:44:26.897631 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:26 crc kubenswrapper[4908]: I0131 07:44:26.915940 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.915922528 podStartE2EDuration="2.915922528s" podCreationTimestamp="2026-01-31 07:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
07:44:26.914518523 +0000 UTC m=+1373.530463177" watchObservedRunningTime="2026-01-31 07:44:26.915922528 +0000 UTC m=+1373.531867182" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.324192 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.789242 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9wmzx"] Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.790513 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.792293 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.794023 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.800115 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9wmzx"] Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.878860 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwkf\" (UniqueName: \"kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.878925 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 
crc kubenswrapper[4908]: I0131 07:44:30.878958 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.879057 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.961284 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.962659 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.967720 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.980807 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcwkf\" (UniqueName: \"kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.980859 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.980898 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.980945 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:30 crc kubenswrapper[4908]: I0131 07:44:30.983622 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:44:30 crc kubenswrapper[4908]: 
I0131 07:44:30.991819 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.009004 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.011936 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.013041 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcwkf\" (UniqueName: \"kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf\") pod \"nova-cell0-cell-mapping-9wmzx\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") " pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.052051 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.053363 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.055863 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.072255 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.082889 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8kt\" (UniqueName: \"kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.083072 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.083111 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.096848 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.100401 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.110578 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.113599 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.172285 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184349 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184388 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184515 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwcq\" (UniqueName: \"kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184568 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184652 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184736 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184789 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184838 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.184899 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8kt\" (UniqueName: \"kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.185097 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnsc\" (UniqueName: \"kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.189194 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.189882 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.209638 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8kt\" (UniqueName: \"kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt\") pod \"nova-cell1-novncproxy-0\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.236713 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.238199 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.249920 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.252080 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.270507 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.272303 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.286423 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288156 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288236 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288270 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc 
kubenswrapper[4908]: I0131 07:44:31.288374 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnsc\" (UniqueName: \"kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288401 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288420 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.288453 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfwcq\" (UniqueName: \"kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.292090 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.293997 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.298538 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.298838 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.302262 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.314621 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.315547 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnsc\" (UniqueName: \"kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc\") pod \"nova-scheduler-0\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.327320 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfwcq\" (UniqueName: 
\"kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq\") pod \"nova-metadata-0\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") " pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.388346 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.391659 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5jw\" (UniqueName: \"kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.391769 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.391792 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.391811 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 
07:44:31.391849 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.391868 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gg5\" (UniqueName: \"kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.392739 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.392829 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.392883 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.442181 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495260 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5jw\" (UniqueName: \"kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495339 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495362 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495408 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495424 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gg5\" (UniqueName: \"kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495477 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495504 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.495528 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.496481 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.497526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc\") pod 
\"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.497724 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.498548 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.501793 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.502565 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.515033 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.525849 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zm5jw\" (UniqueName: \"kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw\") pod \"dnsmasq-dns-8b8cf6657-m4hwz\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.533838 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gg5\" (UniqueName: \"kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5\") pod \"nova-api-0\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.616517 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.643211 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.783499 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9wmzx"] Jan 31 07:44:31 crc kubenswrapper[4908]: W0131 07:44:31.789302 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db9441f_9c32_4ef1_a91c_1e6e76b57a81.slice/crio-a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8 WatchSource:0}: Error finding container a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8: Status 404 returned error can't find the container with id a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8 Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.896639 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:44:31 crc kubenswrapper[4908]: W0131 07:44:31.912205 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod087d762e_9478_43bb_b605_785abab29e87.slice/crio-5759b9b3e2a214219282a2ffc5d6545ed512954e8bb6052be63f360bf773cbc1 WatchSource:0}: Error finding container 5759b9b3e2a214219282a2ffc5d6545ed512954e8bb6052be63f360bf773cbc1: Status 404 returned error can't find the container with id 5759b9b3e2a214219282a2ffc5d6545ed512954e8bb6052be63f360bf773cbc1 Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.917039 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7zxw"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.918103 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.921740 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.921873 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.980699 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7zxw"] Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.981327 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"087d762e-9478-43bb-b605-785abab29e87","Type":"ContainerStarted","Data":"5759b9b3e2a214219282a2ffc5d6545ed512954e8bb6052be63f360bf773cbc1"} Jan 31 07:44:31 crc kubenswrapper[4908]: I0131 07:44:31.981703 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9wmzx" event={"ID":"5db9441f-9c32-4ef1-a91c-1e6e76b57a81","Type":"ContainerStarted","Data":"a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.010001 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.010063 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.010107 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.010168 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sbp\" (UniqueName: \"kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: W0131 07:44:32.043542 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf61a75_5c89_40e2_aa0c_808ea9bf4d44.slice/crio-fc99d8b5d05373bc75ffb5264dac2297faa560cecb9ad3da15275946e65ff3bc WatchSource:0}: Error finding container 
fc99d8b5d05373bc75ffb5264dac2297faa560cecb9ad3da15275946e65ff3bc: Status 404 returned error can't find the container with id fc99d8b5d05373bc75ffb5264dac2297faa560cecb9ad3da15275946e65ff3bc Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.047207 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.060699 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.112964 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.113033 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sbp\" (UniqueName: \"kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.113170 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.113194 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: 
\"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.118644 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.119326 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.119681 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.132636 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sbp\" (UniqueName: \"kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp\") pod \"nova-cell1-conductor-db-sync-x7zxw\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.207411 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.215573 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:44:32 crc 
kubenswrapper[4908]: W0131 07:44:32.217465 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8137b82_8885_4e06_a9c0_226b099080e8.slice/crio-5fc4d9aa042ecb734cf5faa44e89b4fb14f2831fc8af5287c14f216833a61aa6 WatchSource:0}: Error finding container 5fc4d9aa042ecb734cf5faa44e89b4fb14f2831fc8af5287c14f216833a61aa6: Status 404 returned error can't find the container with id 5fc4d9aa042ecb734cf5faa44e89b4fb14f2831fc8af5287c14f216833a61aa6 Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.251450 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.769080 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7zxw"] Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.966055 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" event={"ID":"905d2170-5f0c-4ee0-86b9-d659c80ad9f7","Type":"ContainerStarted","Data":"90fd9664b230b304a2cdfabbac2b9ff4b0d5078416e6527ab66d6fbb187cd89f"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.966096 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" event={"ID":"905d2170-5f0c-4ee0-86b9-d659c80ad9f7","Type":"ContainerStarted","Data":"d3e2bce01a8837d789fad30a234997bd48827b1d30ed9d3201484076e2dd39dc"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.967005 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerStarted","Data":"5fc4d9aa042ecb734cf5faa44e89b4fb14f2831fc8af5287c14f216833a61aa6"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.968463 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerStarted","Data":"fc99d8b5d05373bc75ffb5264dac2297faa560cecb9ad3da15275946e65ff3bc"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.969491 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"871262c6-52b5-4ed8-9f81-7f42677e3763","Type":"ContainerStarted","Data":"b25cbcc2c122defb0b544bcef89386eb061d32fc3d9246d5d1ec6fd1e20744b4"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.971076 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9wmzx" event={"ID":"5db9441f-9c32-4ef1-a91c-1e6e76b57a81","Type":"ContainerStarted","Data":"7f4a49a40ac208b54feea15b6d7d43ff1b3c2bc69f05951587dc554e55a7ab0d"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.973836 4908 generic.go:334] "Generic (PLEG): container finished" podID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerID="3c598a1705b0d0f7cc6b724347ae1843421936f95767e26e760b8b85bfb87b92" exitCode=0 Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.973920 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" event={"ID":"96c5593c-c77f-4dc7-9d32-d95b094fe1b3","Type":"ContainerDied","Data":"3c598a1705b0d0f7cc6b724347ae1843421936f95767e26e760b8b85bfb87b92"} Jan 31 07:44:32 crc kubenswrapper[4908]: I0131 07:44:32.973948 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" event={"ID":"96c5593c-c77f-4dc7-9d32-d95b094fe1b3","Type":"ContainerStarted","Data":"b663f267fce0fa6d9d7ef87b753cf3607622da67a308519ed1ac2d7e591c8fec"} Jan 31 07:44:33 crc kubenswrapper[4908]: I0131 07:44:33.015069 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9wmzx" podStartSLOduration=3.015051435 podStartE2EDuration="3.015051435s" podCreationTimestamp="2026-01-31 07:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:32.990779614 +0000 UTC m=+1379.606724298" watchObservedRunningTime="2026-01-31 07:44:33.015051435 +0000 UTC m=+1379.630996089" Jan 31 07:44:33 crc kubenswrapper[4908]: I0131 07:44:33.987131 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" event={"ID":"96c5593c-c77f-4dc7-9d32-d95b094fe1b3","Type":"ContainerStarted","Data":"485ae5a2d2d226f7b88468a57785dbdecee0eb45b29ec35b121558efdb484fb3"} Jan 31 07:44:33 crc kubenswrapper[4908]: I0131 07:44:33.987847 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:44:34 crc kubenswrapper[4908]: I0131 07:44:34.012890 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" podStartSLOduration=3.012864638 podStartE2EDuration="3.012864638s" podCreationTimestamp="2026-01-31 07:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:34.003561508 +0000 UTC m=+1380.619506162" watchObservedRunningTime="2026-01-31 07:44:34.012864638 +0000 UTC m=+1380.628809282" Jan 31 07:44:34 crc kubenswrapper[4908]: I0131 07:44:34.019651 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" podStartSLOduration=3.019636256 podStartE2EDuration="3.019636256s" podCreationTimestamp="2026-01-31 07:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:34.016754895 +0000 UTC m=+1380.632699549" watchObservedRunningTime="2026-01-31 07:44:34.019636256 +0000 UTC m=+1380.635580910" Jan 31 07:44:34 crc kubenswrapper[4908]: I0131 07:44:34.449442 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:44:34 crc kubenswrapper[4908]: I0131 07:44:34.462860 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.047901 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerStarted","Data":"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"} Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.048495 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerStarted","Data":"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"} Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.048146 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-metadata" containerID="cri-o://fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125" gracePeriod=30 Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.048034 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-log" containerID="cri-o://1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285" gracePeriod=30 Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.051322 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"871262c6-52b5-4ed8-9f81-7f42677e3763","Type":"ContainerStarted","Data":"6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb"} Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.058503 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"087d762e-9478-43bb-b605-785abab29e87","Type":"ContainerStarted","Data":"b8726aaf218aa30ac788bc3684339fc4f8a70ed41a2d8a472060b5274863bbea"}
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.058594 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="087d762e-9478-43bb-b605-785abab29e87" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b8726aaf218aa30ac788bc3684339fc4f8a70ed41a2d8a472060b5274863bbea" gracePeriod=30
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.064633 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerStarted","Data":"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999"}
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.064684 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerStarted","Data":"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57"}
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.078861 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3943736749999998 podStartE2EDuration="9.078841585s" podCreationTimestamp="2026-01-31 07:44:31 +0000 UTC" firstStartedPulling="2026-01-31 07:44:32.050782182 +0000 UTC m=+1378.666726836" lastFinishedPulling="2026-01-31 07:44:38.735250092 +0000 UTC m=+1385.351194746" observedRunningTime="2026-01-31 07:44:40.078073166 +0000 UTC m=+1386.694017830" watchObservedRunningTime="2026-01-31 07:44:40.078841585 +0000 UTC m=+1386.694786239"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.095242 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.405467249 podStartE2EDuration="10.09522402s" podCreationTimestamp="2026-01-31 07:44:30 +0000 UTC" firstStartedPulling="2026-01-31 07:44:32.045502301 +0000 UTC m=+1378.661446955" lastFinishedPulling="2026-01-31 07:44:38.735259072 +0000 UTC m=+1385.351203726" observedRunningTime="2026-01-31 07:44:40.092174205 +0000 UTC m=+1386.708118859" watchObservedRunningTime="2026-01-31 07:44:40.09522402 +0000 UTC m=+1386.711168674"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.120341 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.291015856 podStartE2EDuration="10.120309811s" podCreationTimestamp="2026-01-31 07:44:30 +0000 UTC" firstStartedPulling="2026-01-31 07:44:31.925837519 +0000 UTC m=+1378.541782173" lastFinishedPulling="2026-01-31 07:44:38.755131474 +0000 UTC m=+1385.371076128" observedRunningTime="2026-01-31 07:44:40.110387826 +0000 UTC m=+1386.726332500" watchObservedRunningTime="2026-01-31 07:44:40.120309811 +0000 UTC m=+1386.736254455"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.140419 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.602651261 podStartE2EDuration="9.140399489s" podCreationTimestamp="2026-01-31 07:44:31 +0000 UTC" firstStartedPulling="2026-01-31 07:44:32.220325649 +0000 UTC m=+1378.836270303" lastFinishedPulling="2026-01-31 07:44:38.758073867 +0000 UTC m=+1385.374018531" observedRunningTime="2026-01-31 07:44:40.132106023 +0000 UTC m=+1386.748050677" watchObservedRunningTime="2026-01-31 07:44:40.140399489 +0000 UTC m=+1386.756344153"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.166089 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.431044 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.431096 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.620751 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.675803 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle\") pod \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") "
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.675900 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs\") pod \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") "
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.676074 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data\") pod \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") "
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.676145 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfwcq\" (UniqueName: \"kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq\") pod \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\" (UID: \"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44\") "
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.676364 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs" (OuterVolumeSpecName: "logs") pod "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" (UID: "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.676660 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-logs\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.685456 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq" (OuterVolumeSpecName: "kube-api-access-hfwcq") pod "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" (UID: "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44"). InnerVolumeSpecName "kube-api-access-hfwcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.702368 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" (UID: "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.707024 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data" (OuterVolumeSpecName: "config-data") pod "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" (UID: "7cf61a75-5c89-40e2-aa0c-808ea9bf4d44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.778205 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.778244 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:40 crc kubenswrapper[4908]: I0131 07:44:40.778255 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfwcq\" (UniqueName: \"kubernetes.io/projected/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44-kube-api-access-hfwcq\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076181 4908 generic.go:334] "Generic (PLEG): container finished" podID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerID="fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125" exitCode=0
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076214 4908 generic.go:334] "Generic (PLEG): container finished" podID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerID="1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285" exitCode=143
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076254 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerDied","Data":"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"}
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076297 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerDied","Data":"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"}
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076264 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076308 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf61a75-5c89-40e2-aa0c-808ea9bf4d44","Type":"ContainerDied","Data":"fc99d8b5d05373bc75ffb5264dac2297faa560cecb9ad3da15275946e65ff3bc"}
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.076324 4908 scope.go:117] "RemoveContainer" containerID="fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.078382 4908 generic.go:334] "Generic (PLEG): container finished" podID="5db9441f-9c32-4ef1-a91c-1e6e76b57a81" containerID="7f4a49a40ac208b54feea15b6d7d43ff1b3c2bc69f05951587dc554e55a7ab0d" exitCode=0
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.078456 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9wmzx" event={"ID":"5db9441f-9c32-4ef1-a91c-1e6e76b57a81","Type":"ContainerDied","Data":"7f4a49a40ac208b54feea15b6d7d43ff1b3c2bc69f05951587dc554e55a7ab0d"}
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.098090 4908 scope.go:117] "RemoveContainer" containerID="1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.125226 4908 scope.go:117] "RemoveContainer" containerID="fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"
Jan 31 07:44:41 crc kubenswrapper[4908]: E0131 07:44:41.127191 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125\": container with ID starting with fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125 not found: ID does not exist" containerID="fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127247 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"} err="failed to get container status \"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125\": rpc error: code = NotFound desc = could not find container \"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125\": container with ID starting with fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125 not found: ID does not exist"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127275 4908 scope.go:117] "RemoveContainer" containerID="1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"
Jan 31 07:44:41 crc kubenswrapper[4908]: E0131 07:44:41.127553 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285\": container with ID starting with 1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285 not found: ID does not exist" containerID="1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127581 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"} err="failed to get container status \"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285\": rpc error: code = NotFound desc = could not find container \"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285\": container with ID starting with 1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285 not found: ID does not exist"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127597 4908 scope.go:117] "RemoveContainer" containerID="fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127886 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125"} err="failed to get container status \"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125\": rpc error: code = NotFound desc = could not find container \"fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125\": container with ID starting with fc4b261f2d6c22b6c12934eadb8a8aa7aaac059a8e3779602a8e4a74b5496125 not found: ID does not exist"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.127908 4908 scope.go:117] "RemoveContainer" containerID="1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.133704 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285"} err="failed to get container status \"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285\": rpc error: code = NotFound desc = could not find container \"1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285\": container with ID starting with 1f64b5f6efe229b2b4f36e5eab164e8c9ff5efe13de709be77167021e0365285 not found: ID does not exist"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.137536 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.148460 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.193274 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 07:44:41 crc kubenswrapper[4908]: E0131 07:44:41.194155 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-metadata"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.194196 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-metadata"
Jan 31 07:44:41 crc kubenswrapper[4908]: E0131 07:44:41.194218 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-log"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.194227 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-log"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.194557 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-log"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.194594 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" containerName="nova-metadata-metadata"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.196236 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.199500 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.199746 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.203616 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.287343 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.297143 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.297263 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.297307 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8mw\" (UniqueName: \"kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.297346 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.297525 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.390238 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.390302 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.399930 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.400036 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.400091 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8mw\" (UniqueName: \"kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.400123 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.400184 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.401142 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.404129 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.404304 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.404803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.420333 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8mw\" (UniqueName: \"kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw\") pod \"nova-metadata-0\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.422448 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.518666 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.617646 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.617694 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.644914 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz"
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.711605 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"]
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.711863 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="dnsmasq-dns" containerID="cri-o://86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141" gracePeriod=10
Jan 31 07:44:41 crc kubenswrapper[4908]: I0131 07:44:41.952177 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf61a75-5c89-40e2-aa0c-808ea9bf4d44" path="/var/lib/kubelet/pods/7cf61a75-5c89-40e2-aa0c-808ea9bf4d44/volumes"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.013163 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 07:44:42 crc kubenswrapper[4908]: W0131 07:44:42.020091 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode206a48c_4a7d_45cd_8fab_ada48fd1618d.slice/crio-5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8 WatchSource:0}: Error finding container 5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8: Status 404 returned error can't find the container with id 5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.097771 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerStarted","Data":"5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8"}
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.107301 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.137707 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.533441 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9wmzx"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.625618 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts\") pod \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.625739 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle\") pod \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.625789 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcwkf\" (UniqueName: \"kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf\") pod \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.625869 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data\") pod \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\" (UID: \"5db9441f-9c32-4ef1-a91c-1e6e76b57a81\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.641738 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts" (OuterVolumeSpecName: "scripts") pod "5db9441f-9c32-4ef1-a91c-1e6e76b57a81" (UID: "5db9441f-9c32-4ef1-a91c-1e6e76b57a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.642747 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf" (OuterVolumeSpecName: "kube-api-access-hcwkf") pod "5db9441f-9c32-4ef1-a91c-1e6e76b57a81" (UID: "5db9441f-9c32-4ef1-a91c-1e6e76b57a81"). InnerVolumeSpecName "kube-api-access-hcwkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.677115 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db9441f-9c32-4ef1-a91c-1e6e76b57a81" (UID: "5db9441f-9c32-4ef1-a91c-1e6e76b57a81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.701319 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.701608 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.722603 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data" (OuterVolumeSpecName: "config-data") pod "5db9441f-9c32-4ef1-a91c-1e6e76b57a81" (UID: "5db9441f-9c32-4ef1-a91c-1e6e76b57a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.727536 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.727564 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.727575 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcwkf\" (UniqueName: \"kubernetes.io/projected/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-kube-api-access-hcwkf\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.727583 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9441f-9c32-4ef1-a91c-1e6e76b57a81-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.729121 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj"
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.832609 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb\") pod \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.832661 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc\") pod \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.832701 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb\") pod \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.832760 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config\") pod \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.832890 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nczgt\" (UniqueName: \"kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt\") pod \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\" (UID: \"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8\") "
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.844746 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt" (OuterVolumeSpecName: "kube-api-access-nczgt") pod "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" (UID: "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8"). InnerVolumeSpecName "kube-api-access-nczgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.877806 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" (UID: "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.887534 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config" (OuterVolumeSpecName: "config") pod "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" (UID: "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.898609 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" (UID: "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.915055 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" (UID: "a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.934952 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nczgt\" (UniqueName: \"kubernetes.io/projected/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-kube-api-access-nczgt\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.935004 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.935016 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.935025 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:42 crc kubenswrapper[4908]: I0131 07:44:42.935034 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.105696 4908 generic.go:334] "Generic (PLEG): container finished" podID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerID="86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141" exitCode=0
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.105763 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" event={"ID":"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8","Type":"ContainerDied","Data":"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141"}
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.105766 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj"
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.105794 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-bwjnj" event={"ID":"a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8","Type":"ContainerDied","Data":"b013292294a58c6455ab1167fed981da0ce56139eadd4de9c2ffac6ec6941e81"}
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.105822 4908 scope.go:117] "RemoveContainer" containerID="86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141"
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.107660 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerStarted","Data":"4d0de9e42432c38a6fd8fe5bd64930e27ae7575eb23b426043b1f3315199f86b"}
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.107706 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerStarted","Data":"915c80c54e252450fc700445c187d06efd153965433bfc4474e318bf80bfaccf"}
Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.108814 4908 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9wmzx" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.108868 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9wmzx" event={"ID":"5db9441f-9c32-4ef1-a91c-1e6e76b57a81","Type":"ContainerDied","Data":"a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8"} Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.108919 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49bcffbc6f9bb921362799dbef552879819d3f1e09fd8c2c3ad247e1df94bc8" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.133936 4908 scope.go:117] "RemoveContainer" containerID="90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.162025 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.162292 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" containerName="kube-state-metrics" containerID="cri-o://491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9" gracePeriod=30 Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.164144 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.164125547 podStartE2EDuration="2.164125547s" podCreationTimestamp="2026-01-31 07:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:43.148718116 +0000 UTC m=+1389.764662780" watchObservedRunningTime="2026-01-31 07:44:43.164125547 +0000 UTC m=+1389.780070201" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.174205 4908 scope.go:117] "RemoveContainer" 
containerID="86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141" Jan 31 07:44:43 crc kubenswrapper[4908]: E0131 07:44:43.174733 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141\": container with ID starting with 86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141 not found: ID does not exist" containerID="86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.174770 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141"} err="failed to get container status \"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141\": rpc error: code = NotFound desc = could not find container \"86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141\": container with ID starting with 86896221467e4c66b1a2d59f2489fabebb17d706b2f1a59ed744661c2cdef141 not found: ID does not exist" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.174795 4908 scope.go:117] "RemoveContainer" containerID="90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645" Jan 31 07:44:43 crc kubenswrapper[4908]: E0131 07:44:43.175414 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645\": container with ID starting with 90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645 not found: ID does not exist" containerID="90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.175442 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645"} err="failed to get container status \"90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645\": rpc error: code = NotFound desc = could not find container \"90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645\": container with ID starting with 90ff1278705d541ac5e35455fe84779b88fd28e316f6bb37a5f7f0c7dff28645 not found: ID does not exist" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.176799 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.184952 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-bwjnj"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.315202 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.315400 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-log" containerID="cri-o://fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57" gracePeriod=30 Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.315776 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-api" containerID="cri-o://15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999" gracePeriod=30 Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.338448 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.388874 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.635667 4908 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.755511 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pv7t\" (UniqueName: \"kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t\") pod \"5c66c6a9-7173-46fc-b95a-b14d535e1b84\" (UID: \"5c66c6a9-7173-46fc-b95a-b14d535e1b84\") " Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.763927 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t" (OuterVolumeSpecName: "kube-api-access-8pv7t") pod "5c66c6a9-7173-46fc-b95a-b14d535e1b84" (UID: "5c66c6a9-7173-46fc-b95a-b14d535e1b84"). InnerVolumeSpecName "kube-api-access-8pv7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.857414 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pv7t\" (UniqueName: \"kubernetes.io/projected/5c66c6a9-7173-46fc-b95a-b14d535e1b84-kube-api-access-8pv7t\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:43 crc kubenswrapper[4908]: I0131 07:44:43.955262 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" path="/var/lib/kubelet/pods/a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8/volumes" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.120877 4908 generic.go:334] "Generic (PLEG): container finished" podID="c8137b82-8885-4e06-a9c0-226b099080e8" containerID="fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57" exitCode=143 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.120947 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerDied","Data":"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57"} Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.124344 4908 generic.go:334] "Generic (PLEG): container finished" podID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" containerID="491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9" exitCode=2 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.124428 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.124434 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c66c6a9-7173-46fc-b95a-b14d535e1b84","Type":"ContainerDied","Data":"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9"} Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.124462 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c66c6a9-7173-46fc-b95a-b14d535e1b84","Type":"ContainerDied","Data":"84261ff1dd6830f5593e9af64fb3ffcbf4bd60de50d608a93ada3ec087f48828"} Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.124480 4908 scope.go:117] "RemoveContainer" containerID="491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.150917 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.152960 4908 scope.go:117] "RemoveContainer" containerID="491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9" Jan 31 07:44:44 crc kubenswrapper[4908]: E0131 07:44:44.153650 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9\": container with ID starting with 
491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9 not found: ID does not exist" containerID="491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.153688 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9"} err="failed to get container status \"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9\": rpc error: code = NotFound desc = could not find container \"491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9\": container with ID starting with 491710e61f70d1e90b5278ad1f61faf918dc050225b5ff0fc94242649bebb3a9 not found: ID does not exist" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.159292 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.179674 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:44 crc kubenswrapper[4908]: E0131 07:44:44.180171 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" containerName="kube-state-metrics" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.180192 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" containerName="kube-state-metrics" Jan 31 07:44:44 crc kubenswrapper[4908]: E0131 07:44:44.180209 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="init" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.180217 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="init" Jan 31 07:44:44 crc kubenswrapper[4908]: E0131 07:44:44.183191 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5db9441f-9c32-4ef1-a91c-1e6e76b57a81" containerName="nova-manage" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.183240 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9441f-9c32-4ef1-a91c-1e6e76b57a81" containerName="nova-manage" Jan 31 07:44:44 crc kubenswrapper[4908]: E0131 07:44:44.183315 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="dnsmasq-dns" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.183328 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="dnsmasq-dns" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.183739 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9441f-9c32-4ef1-a91c-1e6e76b57a81" containerName="nova-manage" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.183772 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" containerName="kube-state-metrics" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.183791 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84ead28-57fe-4e0d-b8c6-70fdb7ec4be8" containerName="dnsmasq-dns" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.184578 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.187331 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.187422 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.198762 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.264471 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.264558 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.264629 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.264689 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48h4\" (UniqueName: 
\"kubernetes.io/projected/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-api-access-w48h4\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.363739 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.364082 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-central-agent" containerID="cri-o://f69c16f2521faf6aa05d6d07e6ae6ae62a43a2a0489e91485c8a90167a709741" gracePeriod=30 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.364164 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-notification-agent" containerID="cri-o://ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff" gracePeriod=30 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.364194 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="proxy-httpd" containerID="cri-o://db2b1974dd95dc85249afd7582688b0a84393aa6791ea87b6435ea0030341ac4" gracePeriod=30 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.364194 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="sg-core" containerID="cri-o://d02b051f220915ce0dcf0ac5914b13a410bf24a92985b40c6f63d5e3fdbd89bf" gracePeriod=30 Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.366093 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.366177 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.366273 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48h4\" (UniqueName: \"kubernetes.io/projected/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-api-access-w48h4\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.366331 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.382307 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.382398 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.385689 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.388584 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48h4\" (UniqueName: \"kubernetes.io/projected/9eadc7d2-1530-480d-b152-1e13e0d78eac-kube-api-access-w48h4\") pod \"kube-state-metrics-0\" (UID: \"9eadc7d2-1530-480d-b152-1e13e0d78eac\") " pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.505619 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:44:44 crc kubenswrapper[4908]: I0131 07:44:44.940093 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134360 4908 generic.go:334] "Generic (PLEG): container finished" podID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerID="db2b1974dd95dc85249afd7582688b0a84393aa6791ea87b6435ea0030341ac4" exitCode=0 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134649 4908 generic.go:334] "Generic (PLEG): container finished" podID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerID="d02b051f220915ce0dcf0ac5914b13a410bf24a92985b40c6f63d5e3fdbd89bf" exitCode=2 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134660 4908 generic.go:334] "Generic (PLEG): container finished" podID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerID="f69c16f2521faf6aa05d6d07e6ae6ae62a43a2a0489e91485c8a90167a709741" exitCode=0 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134420 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerDied","Data":"db2b1974dd95dc85249afd7582688b0a84393aa6791ea87b6435ea0030341ac4"} Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134743 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerDied","Data":"d02b051f220915ce0dcf0ac5914b13a410bf24a92985b40c6f63d5e3fdbd89bf"} Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.134759 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerDied","Data":"f69c16f2521faf6aa05d6d07e6ae6ae62a43a2a0489e91485c8a90167a709741"} Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.137480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"9eadc7d2-1530-480d-b152-1e13e0d78eac","Type":"ContainerStarted","Data":"ab67ace6494979b8f4ab12ba1627b5a3e9525b1fc1a4f15a71ab656896fc3743"} Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.137646 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-log" containerID="cri-o://915c80c54e252450fc700445c187d06efd153965433bfc4474e318bf80bfaccf" gracePeriod=30 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.137717 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-metadata" containerID="cri-o://4d0de9e42432c38a6fd8fe5bd64930e27ae7575eb23b426043b1f3315199f86b" gracePeriod=30 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.138198 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerName="nova-scheduler-scheduler" containerID="cri-o://6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb" gracePeriod=30 Jan 31 07:44:45 crc kubenswrapper[4908]: I0131 07:44:45.953546 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c66c6a9-7173-46fc-b95a-b14d535e1b84" path="/var/lib/kubelet/pods/5c66c6a9-7173-46fc-b95a-b14d535e1b84/volumes" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149019 4908 generic.go:334] "Generic (PLEG): container finished" podID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerID="4d0de9e42432c38a6fd8fe5bd64930e27ae7575eb23b426043b1f3315199f86b" exitCode=0 Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149051 4908 generic.go:334] "Generic (PLEG): container finished" podID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" 
containerID="915c80c54e252450fc700445c187d06efd153965433bfc4474e318bf80bfaccf" exitCode=143 Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149098 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerDied","Data":"4d0de9e42432c38a6fd8fe5bd64930e27ae7575eb23b426043b1f3315199f86b"} Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149126 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerDied","Data":"915c80c54e252450fc700445c187d06efd153965433bfc4474e318bf80bfaccf"} Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149136 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e206a48c-4a7d-45cd-8fab-ada48fd1618d","Type":"ContainerDied","Data":"5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8"} Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.149146 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5144fea9514603e3320f9df2a7a88d7ed4d2e43d70290feb174da7008bb00ed8" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.150674 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eadc7d2-1530-480d-b152-1e13e0d78eac","Type":"ContainerStarted","Data":"f962014d9877b4fc9e995a70d60232a6e5307d858ad5b622e99af0c72b593346"} Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.150894 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.217809 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.243657 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.564519533 podStartE2EDuration="2.243635297s" podCreationTimestamp="2026-01-31 07:44:44 +0000 UTC" firstStartedPulling="2026-01-31 07:44:44.947875918 +0000 UTC m=+1391.563820572" lastFinishedPulling="2026-01-31 07:44:45.626991682 +0000 UTC m=+1392.242936336" observedRunningTime="2026-01-31 07:44:46.167417521 +0000 UTC m=+1392.783362175" watchObservedRunningTime="2026-01-31 07:44:46.243635297 +0000 UTC m=+1392.859579951" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.299661 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs\") pod \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.299710 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle\") pod \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.299799 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8mw\" (UniqueName: \"kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw\") pod \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.299853 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs\") pod \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.299897 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data\") pod \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\" (UID: \"e206a48c-4a7d-45cd-8fab-ada48fd1618d\") " Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.300290 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs" (OuterVolumeSpecName: "logs") pod "e206a48c-4a7d-45cd-8fab-ada48fd1618d" (UID: "e206a48c-4a7d-45cd-8fab-ada48fd1618d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.300609 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e206a48c-4a7d-45cd-8fab-ada48fd1618d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.310217 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw" (OuterVolumeSpecName: "kube-api-access-dd8mw") pod "e206a48c-4a7d-45cd-8fab-ada48fd1618d" (UID: "e206a48c-4a7d-45cd-8fab-ada48fd1618d"). InnerVolumeSpecName "kube-api-access-dd8mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.342871 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e206a48c-4a7d-45cd-8fab-ada48fd1618d" (UID: "e206a48c-4a7d-45cd-8fab-ada48fd1618d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.353951 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data" (OuterVolumeSpecName: "config-data") pod "e206a48c-4a7d-45cd-8fab-ada48fd1618d" (UID: "e206a48c-4a7d-45cd-8fab-ada48fd1618d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.379326 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e206a48c-4a7d-45cd-8fab-ada48fd1618d" (UID: "e206a48c-4a7d-45cd-8fab-ada48fd1618d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:46 crc kubenswrapper[4908]: E0131 07:44:46.392051 4908 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:44:46 crc kubenswrapper[4908]: E0131 07:44:46.394196 4908 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:44:46 crc kubenswrapper[4908]: E0131 07:44:46.395267 4908 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:44:46 crc kubenswrapper[4908]: E0131 07:44:46.395326 4908 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerName="nova-scheduler-scheduler" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.402126 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8mw\" (UniqueName: \"kubernetes.io/projected/e206a48c-4a7d-45cd-8fab-ada48fd1618d-kube-api-access-dd8mw\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.402166 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.402180 4908 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:46 crc kubenswrapper[4908]: I0131 07:44:46.402191 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e206a48c-4a7d-45cd-8fab-ada48fd1618d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.163450 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.206367 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.221682 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.232558 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:47 crc kubenswrapper[4908]: E0131 07:44:47.233011 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-log" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.233036 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-log" Jan 31 07:44:47 crc kubenswrapper[4908]: E0131 07:44:47.233061 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-metadata" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.233070 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-metadata" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.233271 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-log" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.233315 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" containerName="nova-metadata-metadata" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.234320 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.238816 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.239157 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.245771 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.318014 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.318089 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.318114 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.318162 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgtj\" (UniqueName: 
\"kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.318249 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.419492 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.419573 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.419608 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.419652 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgtj\" (UniqueName: \"kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") 
" pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.420144 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.420494 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.424758 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.425368 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.428671 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.444622 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgtj\" (UniqueName: 
\"kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj\") pod \"nova-metadata-0\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.558431 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:44:47 crc kubenswrapper[4908]: I0131 07:44:47.965392 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e206a48c-4a7d-45cd-8fab-ada48fd1618d" path="/var/lib/kubelet/pods/e206a48c-4a7d-45cd-8fab-ada48fd1618d/volumes" Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.015064 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod025fff27_1ca8_4619_b60d_ee68f0a7fff8.slice/crio-conmon-ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.061313 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.154566 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.177217 4908 generic.go:334] "Generic (PLEG): container finished" podID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerID="ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff" exitCode=0 Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.177297 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerDied","Data":"ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff"} Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.181249 4908 generic.go:334] "Generic (PLEG): container finished" podID="c8137b82-8885-4e06-a9c0-226b099080e8" containerID="15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999" exitCode=0 Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.181336 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.181339 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerDied","Data":"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999"} Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.181520 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8137b82-8885-4e06-a9c0-226b099080e8","Type":"ContainerDied","Data":"5fc4d9aa042ecb734cf5faa44e89b4fb14f2831fc8af5287c14f216833a61aa6"} Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.181594 4908 scope.go:117] "RemoveContainer" containerID="15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.183234 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerStarted","Data":"fefc03f386cee19c2ad515cf87023d51101ff202e117a17167565ed6f4a400fb"} Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.195104 4908 generic.go:334] "Generic (PLEG): container finished" podID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerID="6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb" exitCode=0 Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.195155 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"871262c6-52b5-4ed8-9f81-7f42677e3763","Type":"ContainerDied","Data":"6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb"} Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.231563 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs\") pod \"c8137b82-8885-4e06-a9c0-226b099080e8\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.231624 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gg5\" (UniqueName: \"kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5\") pod \"c8137b82-8885-4e06-a9c0-226b099080e8\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.231678 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle\") pod \"c8137b82-8885-4e06-a9c0-226b099080e8\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.231851 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data\") pod \"c8137b82-8885-4e06-a9c0-226b099080e8\" (UID: \"c8137b82-8885-4e06-a9c0-226b099080e8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.233342 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs" (OuterVolumeSpecName: "logs") pod "c8137b82-8885-4e06-a9c0-226b099080e8" (UID: "c8137b82-8885-4e06-a9c0-226b099080e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.238175 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5" (OuterVolumeSpecName: "kube-api-access-r8gg5") pod "c8137b82-8885-4e06-a9c0-226b099080e8" (UID: "c8137b82-8885-4e06-a9c0-226b099080e8"). InnerVolumeSpecName "kube-api-access-r8gg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.260025 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8137b82-8885-4e06-a9c0-226b099080e8" (UID: "c8137b82-8885-4e06-a9c0-226b099080e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.264354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data" (OuterVolumeSpecName: "config-data") pod "c8137b82-8885-4e06-a9c0-226b099080e8" (UID: "c8137b82-8885-4e06-a9c0-226b099080e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.289584 4908 scope.go:117] "RemoveContainer" containerID="fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.300928 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.307568 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.309137 4908 scope.go:117] "RemoveContainer" containerID="15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999" Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.309500 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999\": container with ID starting with 15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999 not found: ID does not exist" containerID="15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.309529 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999"} err="failed to get container status \"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999\": rpc error: code = NotFound desc = could not find container \"15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999\": container with ID starting with 15e40d8342a2bc5a434cf9e0ded5da122f15a81f9ea8df4281d2f84434b99999 not found: ID does not exist" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.309552 4908 scope.go:117] "RemoveContainer" containerID="fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57" 
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.309776 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57\": container with ID starting with fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57 not found: ID does not exist" containerID="fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.309797 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57"} err="failed to get container status \"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57\": rpc error: code = NotFound desc = could not find container \"fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57\": container with ID starting with fd6d19a76f36af98a5aba3b0ab34de301aa46e6fc2173ee478e204039e1edd57 not found: ID does not exist" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.332901 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.332946 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.332995 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle\") pod \"871262c6-52b5-4ed8-9f81-7f42677e3763\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333034 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsnsc\" (UniqueName: \"kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc\") pod \"871262c6-52b5-4ed8-9f81-7f42677e3763\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333067 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333126 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333156 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333215 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2br4\" (UniqueName: \"kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: 
I0131 07:44:48.333243 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data\") pod \"871262c6-52b5-4ed8-9f81-7f42677e3763\" (UID: \"871262c6-52b5-4ed8-9f81-7f42677e3763\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333280 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd\") pod \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\" (UID: \"025fff27-1ca8-4619-b60d-ee68f0a7fff8\") " Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333591 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333610 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8137b82-8885-4e06-a9c0-226b099080e8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333622 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gg5\" (UniqueName: \"kubernetes.io/projected/c8137b82-8885-4e06-a9c0-226b099080e8-kube-api-access-r8gg5\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.333634 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8137b82-8885-4e06-a9c0-226b099080e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.334025 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.338196 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.340206 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts" (OuterVolumeSpecName: "scripts") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.369410 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4" (OuterVolumeSpecName: "kube-api-access-v2br4") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "kube-api-access-v2br4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.370233 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc" (OuterVolumeSpecName: "kube-api-access-nsnsc") pod "871262c6-52b5-4ed8-9f81-7f42677e3763" (UID: "871262c6-52b5-4ed8-9f81-7f42677e3763"). InnerVolumeSpecName "kube-api-access-nsnsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.435553 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.435579 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2br4\" (UniqueName: \"kubernetes.io/projected/025fff27-1ca8-4619-b60d-ee68f0a7fff8-kube-api-access-v2br4\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.435591 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.435601 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025fff27-1ca8-4619-b60d-ee68f0a7fff8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.435611 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsnsc\" (UniqueName: \"kubernetes.io/projected/871262c6-52b5-4ed8-9f81-7f42677e3763-kube-api-access-nsnsc\") on node \"crc\" DevicePath \"\"" Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.438786 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871262c6-52b5-4ed8-9f81-7f42677e3763" (UID: "871262c6-52b5-4ed8-9f81-7f42677e3763"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.451232 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.457626 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data" (OuterVolumeSpecName: "config-data") pod "871262c6-52b5-4ed8-9f81-7f42677e3763" (UID: "871262c6-52b5-4ed8-9f81-7f42677e3763"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.482639 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.518797 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data" (OuterVolumeSpecName: "config-data") pod "025fff27-1ca8-4619-b60d-ee68f0a7fff8" (UID: "025fff27-1ca8-4619-b60d-ee68f0a7fff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.525294 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.542959 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.543023 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.543038 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871262c6-52b5-4ed8-9f81-7f42677e3763-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.543051 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.543062 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025fff27-1ca8-4619-b60d-ee68f0a7fff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.550138 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.557607 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558157 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="sg-core"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558189 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="sg-core"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558242 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-log"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558252 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-log"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558269 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerName="nova-scheduler-scheduler"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558278 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerName="nova-scheduler-scheduler"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558289 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="proxy-httpd"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558297 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="proxy-httpd"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558313 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-api"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558321 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-api"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558338 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-notification-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558348 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-notification-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: E0131 07:44:48.558360 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-central-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558369 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-central-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558569 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-central-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558610 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" containerName="nova-scheduler-scheduler"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558628 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="sg-core"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558642 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="ceilometer-notification-agent"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558656 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-api"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558667 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" containerName="proxy-httpd"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.558684 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" containerName="nova-api-log"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.559843 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.565752 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.566118 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.644365 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.644431 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.644469 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5xk\" (UniqueName: \"kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.644566 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.747214 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.747306 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.747989 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.748062 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5xk\" (UniqueName: \"kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.748223 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.751762 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.766835 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.770441 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5xk\" (UniqueName: \"kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk\") pod \"nova-api-0\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " pod="openstack/nova-api-0"
Jan 31 07:44:48 crc kubenswrapper[4908]: I0131 07:44:48.879719 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.208780 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"025fff27-1ca8-4619-b60d-ee68f0a7fff8","Type":"ContainerDied","Data":"ab856df615f74a5d69d2def74ae0f7329b1e229e5f4f734490a666a0c039aa3a"}
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.208843 4908 scope.go:117] "RemoveContainer" containerID="db2b1974dd95dc85249afd7582688b0a84393aa6791ea87b6435ea0030341ac4"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.209049 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.217480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerStarted","Data":"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0"}
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.217539 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerStarted","Data":"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57"}
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.220188 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"871262c6-52b5-4ed8-9f81-7f42677e3763","Type":"ContainerDied","Data":"b25cbcc2c122defb0b544bcef89386eb061d32fc3d9246d5d1ec6fd1e20744b4"}
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.220253 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.236921 4908 scope.go:117] "RemoveContainer" containerID="d02b051f220915ce0dcf0ac5914b13a410bf24a92985b40c6f63d5e3fdbd89bf"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.255182 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.255142615 podStartE2EDuration="2.255142615s" podCreationTimestamp="2026-01-31 07:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:49.237465277 +0000 UTC m=+1395.853409971" watchObservedRunningTime="2026-01-31 07:44:49.255142615 +0000 UTC m=+1395.871087259"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.270422 4908 scope.go:117] "RemoveContainer" containerID="ed0a0c6c0d7420e018f093d06d439faea400831b79c869697fdc7c2aad095fff"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.274635 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.291515 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.307555 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.322309 4908 scope.go:117] "RemoveContainer" containerID="f69c16f2521faf6aa05d6d07e6ae6ae62a43a2a0489e91485c8a90167a709741"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.334544 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.347044 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.349871 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.352246 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.357044 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.357309 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.359164 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.360635 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.361356 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.362132 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.363797 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.363860 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.363898 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.363928 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.363948 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rjw\" (UniqueName: \"kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.364013 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.364041 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.364081 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.365560 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.367540 4908 scope.go:117] "RemoveContainer" containerID="6f619b06045411b6decb3364d81efd4a65d4b6b5a93142cf032f3b9b18d016cb"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.374572 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:44:49 crc kubenswrapper[4908]: W0131 07:44:49.375730 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc06b3d_2daf_4dbd_8dd7_0cba97ba7281.slice/crio-cd01b26857fd128c06a18791d306e07d7f736c4b966c81d0f662ace825c3a568 WatchSource:0}: Error finding container cd01b26857fd128c06a18791d306e07d7f736c4b966c81d0f662ace825c3a568: Status 404 returned error can't find the container with id cd01b26857fd128c06a18791d306e07d7f736c4b966c81d0f662ace825c3a568
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465383 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465455 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465499 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465538 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465580 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465610 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465711 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465754 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465799 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465820 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rjw\" (UniqueName: \"kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.465872 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx2z\" (UniqueName: \"kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.467432 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.467797 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.472287 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.472652 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.473681 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.473859 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.475054 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.485418 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rjw\" (UniqueName: \"kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw\") pod \"ceilometer-0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") " pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.567285 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bx2z\" (UniqueName: \"kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.567399 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.567432 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.572178 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.572440 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.593936 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bx2z\" (UniqueName: \"kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z\") pod \"nova-scheduler-0\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.759261 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.825033 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.960060 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025fff27-1ca8-4619-b60d-ee68f0a7fff8" path="/var/lib/kubelet/pods/025fff27-1ca8-4619-b60d-ee68f0a7fff8/volumes"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.962565 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871262c6-52b5-4ed8-9f81-7f42677e3763" path="/var/lib/kubelet/pods/871262c6-52b5-4ed8-9f81-7f42677e3763/volumes"
Jan 31 07:44:49 crc kubenswrapper[4908]: I0131 07:44:49.963931 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8137b82-8885-4e06-a9c0-226b099080e8" path="/var/lib/kubelet/pods/c8137b82-8885-4e06-a9c0-226b099080e8/volumes"
Jan 31 07:44:50 crc kubenswrapper[4908]: I0131 07:44:50.233674 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerStarted","Data":"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719"}
Jan 31 07:44:50 crc kubenswrapper[4908]: I0131 07:44:50.234069 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerStarted","Data":"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b"}
Jan 31 07:44:50 crc kubenswrapper[4908]: I0131 07:44:50.234092 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerStarted","Data":"cd01b26857fd128c06a18791d306e07d7f736c4b966c81d0f662ace825c3a568"}
Jan 31 07:44:50 crc kubenswrapper[4908]: I0131 07:44:50.283458 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:44:50 crc kubenswrapper[4908]: W0131 07:44:50.382454 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23524731_05d4_4f72_b7cb_95bc9004706a.slice/crio-54669ab3a5475959bc651d02068cf22394142bd3250ce12001164aeb55721466 WatchSource:0}: Error finding container 54669ab3a5475959bc651d02068cf22394142bd3250ce12001164aeb55721466: Status 404 returned error can't find the container with id 54669ab3a5475959bc651d02068cf22394142bd3250ce12001164aeb55721466
Jan 31 07:44:50 crc kubenswrapper[4908]: I0131 07:44:50.383695 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 07:44:51 crc kubenswrapper[4908]: I0131 07:44:51.246602 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"23524731-05d4-4f72-b7cb-95bc9004706a","Type":"ContainerStarted","Data":"e250bd7103af642db66be41dd2071ccb6bcdfa66515d4a6c3c92114bf793cced"}
Jan 31 07:44:51 crc kubenswrapper[4908]: I0131 07:44:51.246934 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"23524731-05d4-4f72-b7cb-95bc9004706a","Type":"ContainerStarted","Data":"54669ab3a5475959bc651d02068cf22394142bd3250ce12001164aeb55721466"}
Jan 31 07:44:51 crc kubenswrapper[4908]: I0131 07:44:51.248503 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerStarted","Data":"87f8427517be8472c2a44f2ee459273127eab08c35559e77a66867d3c173d0d3"}
Jan 31 07:44:51 crc kubenswrapper[4908]: I0131 07:44:51.268875 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.268854868 podStartE2EDuration="2.268854868s" podCreationTimestamp="2026-01-31 07:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:51.267209127 +0000 UTC m=+1397.883153781" watchObservedRunningTime="2026-01-31 07:44:51.268854868 +0000 UTC m=+1397.884799542"
Jan 31 07:44:51 crc kubenswrapper[4908]: I0131 07:44:51.298581 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.298562843 podStartE2EDuration="3.298562843s" podCreationTimestamp="2026-01-31 07:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:44:51.286281659 +0000 UTC m=+1397.902226313" watchObservedRunningTime="2026-01-31 07:44:51.298562843 +0000 UTC m=+1397.914507497"
Jan 31 07:44:52 crc kubenswrapper[4908]: I0131 07:44:52.260283 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerStarted","Data":"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51"}
Jan 31 07:44:52 crc kubenswrapper[4908]: I0131 07:44:52.260510 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerStarted","Data":"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250"}
Jan 31 07:44:52 crc kubenswrapper[4908]: I0131 07:44:52.559959 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 31 07:44:52 crc kubenswrapper[4908]: I0131 07:44:52.560038 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 31 07:44:53 crc kubenswrapper[4908]: I0131 07:44:53.270400 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerStarted","Data":"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5"}
Jan 31 07:44:54 crc kubenswrapper[4908]: I0131 07:44:54.517060 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 31 07:44:54 crc kubenswrapper[4908]: I0131 07:44:54.826068 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 31 07:44:56 crc kubenswrapper[4908]: I0131 07:44:56.304089 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerStarted","Data":"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae"}
Jan 31 07:44:56 crc kubenswrapper[4908]: I0131 07:44:56.305507 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 07:44:57 crc kubenswrapper[4908]: I0131 07:44:57.559843 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 31 07:44:57 crc kubenswrapper[4908]: I0131 07:44:57.560123 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 31 07:44:58 crc kubenswrapper[4908]: I0131 07:44:58.574114 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting
headers)" Jan 31 07:44:58 crc kubenswrapper[4908]: I0131 07:44:58.574227 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:44:58 crc kubenswrapper[4908]: I0131 07:44:58.879995 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:44:58 crc kubenswrapper[4908]: I0131 07:44:58.880068 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:44:59 crc kubenswrapper[4908]: I0131 07:44:59.825606 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 07:44:59 crc kubenswrapper[4908]: I0131 07:44:59.856575 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 07:44:59 crc kubenswrapper[4908]: I0131 07:44:59.875955 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.605832491 podStartE2EDuration="10.875940004s" podCreationTimestamp="2026-01-31 07:44:49 +0000 UTC" firstStartedPulling="2026-01-31 07:44:50.282480478 +0000 UTC m=+1396.898425132" lastFinishedPulling="2026-01-31 07:44:55.552587991 +0000 UTC m=+1402.168532645" observedRunningTime="2026-01-31 07:44:56.3329399 +0000 UTC m=+1402.948884554" watchObservedRunningTime="2026-01-31 07:44:59.875940004 +0000 UTC m=+1406.491884658" Jan 31 07:44:59 crc kubenswrapper[4908]: I0131 07:44:59.963119 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 31 07:44:59 crc kubenswrapper[4908]: I0131 07:44:59.963138 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.152371 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm"] Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.153718 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.155899 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.156015 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.165664 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm"] Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.180911 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79r9\" (UniqueName: \"kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.181092 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.181202 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.282953 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m79r9\" (UniqueName: \"kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.283286 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.283409 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.284273 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.299957 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.300351 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m79r9\" (UniqueName: \"kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9\") pod \"collect-profiles-29497425-d2kxm\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.339412 4908 generic.go:334] "Generic (PLEG): container finished" podID="905d2170-5f0c-4ee0-86b9-d659c80ad9f7" containerID="90fd9664b230b304a2cdfabbac2b9ff4b0d5078416e6527ab66d6fbb187cd89f" exitCode=0 Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.339473 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" event={"ID":"905d2170-5f0c-4ee0-86b9-d659c80ad9f7","Type":"ContainerDied","Data":"90fd9664b230b304a2cdfabbac2b9ff4b0d5078416e6527ab66d6fbb187cd89f"} Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.378148 4908 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 07:45:00 crc kubenswrapper[4908]: I0131 07:45:00.475692 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.042457 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm"] Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.355134 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" event={"ID":"53cb8abd-5768-468b-b2b1-2e2667692cf9","Type":"ContainerStarted","Data":"0a35cfad048bb698539770c06b992e39a357d5db3b6918364c50c30c696a6270"} Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.355197 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" event={"ID":"53cb8abd-5768-468b-b2b1-2e2667692cf9","Type":"ContainerStarted","Data":"6ade300a79c4d015e0b80658320819c797dae982e32caa63e1db729b8663951f"} Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.376996 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" podStartSLOduration=1.376956294 podStartE2EDuration="1.376956294s" podCreationTimestamp="2026-01-31 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:01.369616512 +0000 UTC m=+1407.985561176" watchObservedRunningTime="2026-01-31 07:45:01.376956294 +0000 UTC m=+1407.992900948" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.725303 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.910670 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data\") pod \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.910825 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6sbp\" (UniqueName: \"kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp\") pod \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.910860 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts\") pod \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.910957 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle\") pod \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\" (UID: \"905d2170-5f0c-4ee0-86b9-d659c80ad9f7\") " Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.916418 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts" (OuterVolumeSpecName: "scripts") pod "905d2170-5f0c-4ee0-86b9-d659c80ad9f7" (UID: "905d2170-5f0c-4ee0-86b9-d659c80ad9f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.920176 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp" (OuterVolumeSpecName: "kube-api-access-q6sbp") pod "905d2170-5f0c-4ee0-86b9-d659c80ad9f7" (UID: "905d2170-5f0c-4ee0-86b9-d659c80ad9f7"). InnerVolumeSpecName "kube-api-access-q6sbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.940098 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data" (OuterVolumeSpecName: "config-data") pod "905d2170-5f0c-4ee0-86b9-d659c80ad9f7" (UID: "905d2170-5f0c-4ee0-86b9-d659c80ad9f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:01 crc kubenswrapper[4908]: I0131 07:45:01.942108 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "905d2170-5f0c-4ee0-86b9-d659c80ad9f7" (UID: "905d2170-5f0c-4ee0-86b9-d659c80ad9f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.013440 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6sbp\" (UniqueName: \"kubernetes.io/projected/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-kube-api-access-q6sbp\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.013698 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.013708 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.013716 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905d2170-5f0c-4ee0-86b9-d659c80ad9f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.379263 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" event={"ID":"905d2170-5f0c-4ee0-86b9-d659c80ad9f7","Type":"ContainerDied","Data":"d3e2bce01a8837d789fad30a234997bd48827b1d30ed9d3201484076e2dd39dc"} Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.379294 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7zxw" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.379317 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e2bce01a8837d789fad30a234997bd48827b1d30ed9d3201484076e2dd39dc" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.385737 4908 generic.go:334] "Generic (PLEG): container finished" podID="53cb8abd-5768-468b-b2b1-2e2667692cf9" containerID="0a35cfad048bb698539770c06b992e39a357d5db3b6918364c50c30c696a6270" exitCode=0 Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.385788 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" event={"ID":"53cb8abd-5768-468b-b2b1-2e2667692cf9","Type":"ContainerDied","Data":"0a35cfad048bb698539770c06b992e39a357d5db3b6918364c50c30c696a6270"} Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.439616 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:45:02 crc kubenswrapper[4908]: E0131 07:45:02.440106 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905d2170-5f0c-4ee0-86b9-d659c80ad9f7" containerName="nova-cell1-conductor-db-sync" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.440132 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="905d2170-5f0c-4ee0-86b9-d659c80ad9f7" containerName="nova-cell1-conductor-db-sync" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.440341 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="905d2170-5f0c-4ee0-86b9-d659c80ad9f7" containerName="nova-cell1-conductor-db-sync" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.441092 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.443262 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.449772 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.524049 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.524126 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.524350 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpmv\" (UniqueName: \"kubernetes.io/projected/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-kube-api-access-lbpmv\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.625927 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbpmv\" (UniqueName: \"kubernetes.io/projected/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-kube-api-access-lbpmv\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 
07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.626051 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.626079 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.631199 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.631213 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.649008 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbpmv\" (UniqueName: \"kubernetes.io/projected/54f0b1fc-48c2-4f04-8087-b5d94b8168b2-kube-api-access-lbpmv\") pod \"nova-cell1-conductor-0\" (UID: \"54f0b1fc-48c2-4f04-8087-b5d94b8168b2\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:02 crc kubenswrapper[4908]: I0131 07:45:02.766418 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.174759 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.395250 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"54f0b1fc-48c2-4f04-8087-b5d94b8168b2","Type":"ContainerStarted","Data":"e2aa51cb5dda91afac6102cfb5616e9219594c9fefc29f775c5c775770b28b1b"} Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.395585 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"54f0b1fc-48c2-4f04-8087-b5d94b8168b2","Type":"ContainerStarted","Data":"4504e9126a51f0699d3d3d13296e10196a15de411349f719ef22953a63206c76"} Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.726032 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.749218 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m79r9\" (UniqueName: \"kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9\") pod \"53cb8abd-5768-468b-b2b1-2e2667692cf9\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.749294 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume\") pod \"53cb8abd-5768-468b-b2b1-2e2667692cf9\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.749448 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume\") pod \"53cb8abd-5768-468b-b2b1-2e2667692cf9\" (UID: \"53cb8abd-5768-468b-b2b1-2e2667692cf9\") " Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.751346 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume" (OuterVolumeSpecName: "config-volume") pod "53cb8abd-5768-468b-b2b1-2e2667692cf9" (UID: "53cb8abd-5768-468b-b2b1-2e2667692cf9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.755931 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9" (OuterVolumeSpecName: "kube-api-access-m79r9") pod "53cb8abd-5768-468b-b2b1-2e2667692cf9" (UID: "53cb8abd-5768-468b-b2b1-2e2667692cf9"). InnerVolumeSpecName "kube-api-access-m79r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.756400 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53cb8abd-5768-468b-b2b1-2e2667692cf9" (UID: "53cb8abd-5768-468b-b2b1-2e2667692cf9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.852380 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m79r9\" (UniqueName: \"kubernetes.io/projected/53cb8abd-5768-468b-b2b1-2e2667692cf9-kube-api-access-m79r9\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.852670 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53cb8abd-5768-468b-b2b1-2e2667692cf9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:03 crc kubenswrapper[4908]: I0131 07:45:03.852680 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53cb8abd-5768-468b-b2b1-2e2667692cf9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:04 crc kubenswrapper[4908]: I0131 07:45:04.420848 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" event={"ID":"53cb8abd-5768-468b-b2b1-2e2667692cf9","Type":"ContainerDied","Data":"6ade300a79c4d015e0b80658320819c797dae982e32caa63e1db729b8663951f"} Jan 31 07:45:04 crc kubenswrapper[4908]: I0131 07:45:04.420890 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ade300a79c4d015e0b80658320819c797dae982e32caa63e1db729b8663951f" Jan 31 07:45:04 crc kubenswrapper[4908]: I0131 07:45:04.420904 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm" Jan 31 07:45:04 crc kubenswrapper[4908]: I0131 07:45:04.421413 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 07:45:04 crc kubenswrapper[4908]: I0131 07:45:04.450364 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.450346402 podStartE2EDuration="2.450346402s" podCreationTimestamp="2026-01-31 07:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:04.444539958 +0000 UTC m=+1411.060484622" watchObservedRunningTime="2026-01-31 07:45:04.450346402 +0000 UTC m=+1411.066291056" Jan 31 07:45:07 crc kubenswrapper[4908]: I0131 07:45:07.563449 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:45:07 crc kubenswrapper[4908]: I0131 07:45:07.565581 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:45:07 crc kubenswrapper[4908]: I0131 07:45:07.569661 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:45:08 crc kubenswrapper[4908]: I0131 07:45:08.463528 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:45:08 crc kubenswrapper[4908]: I0131 07:45:08.886046 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:45:08 crc kubenswrapper[4908]: I0131 07:45:08.887246 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:45:08 crc kubenswrapper[4908]: I0131 07:45:08.890214 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" 
Jan 31 07:45:08 crc kubenswrapper[4908]: I0131 07:45:08.892552 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.482898 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.489299 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.692463 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"]
Jan 31 07:45:09 crc kubenswrapper[4908]: E0131 07:45:09.692972 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cb8abd-5768-468b-b2b1-2e2667692cf9" containerName="collect-profiles"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.693008 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cb8abd-5768-468b-b2b1-2e2667692cf9" containerName="collect-profiles"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.693231 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cb8abd-5768-468b-b2b1-2e2667692cf9" containerName="collect-profiles"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.694465 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.707402 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"]
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.862595 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.862838 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.862870 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9257\" (UniqueName: \"kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.862889 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.862921 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.964313 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.964361 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.964394 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9257\" (UniqueName: \"kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.964416 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.964442 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.965255 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.965265 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.965903 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.969498 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:09 crc kubenswrapper[4908]: I0131 07:45:09.993372 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9257\" (UniqueName: \"kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257\") pod \"dnsmasq-dns-68d4b6d797-gjqxv\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.036660 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.430665 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.430957 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.486964 4908 generic.go:334] "Generic (PLEG): container finished" podID="087d762e-9478-43bb-b605-785abab29e87" containerID="b8726aaf218aa30ac788bc3684339fc4f8a70ed41a2d8a472060b5274863bbea" exitCode=137
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.487075 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"087d762e-9478-43bb-b605-785abab29e87","Type":"ContainerDied","Data":"b8726aaf218aa30ac788bc3684339fc4f8a70ed41a2d8a472060b5274863bbea"}
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.508323 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"]
Jan 31 07:45:10 crc kubenswrapper[4908]: W0131 07:45:10.514052 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ebbbad_0bef_4b23_b229_eba1eb8dfd6c.slice/crio-255ffa39874c3e21bf702f0ad4a8f65da4e6254abe3efb6c3cfd6e60519479c1 WatchSource:0}: Error finding container 255ffa39874c3e21bf702f0ad4a8f65da4e6254abe3efb6c3cfd6e60519479c1: Status 404 returned error can't find the container with id 255ffa39874c3e21bf702f0ad4a8f65da4e6254abe3efb6c3cfd6e60519479c1
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.558326 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.677435 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle\") pod \"087d762e-9478-43bb-b605-785abab29e87\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") "
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.677498 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data\") pod \"087d762e-9478-43bb-b605-785abab29e87\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") "
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.677622 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt8kt\" (UniqueName: \"kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt\") pod \"087d762e-9478-43bb-b605-785abab29e87\" (UID: \"087d762e-9478-43bb-b605-785abab29e87\") "
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.682741 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt" (OuterVolumeSpecName: "kube-api-access-vt8kt") pod "087d762e-9478-43bb-b605-785abab29e87" (UID: "087d762e-9478-43bb-b605-785abab29e87"). InnerVolumeSpecName "kube-api-access-vt8kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.707610 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "087d762e-9478-43bb-b605-785abab29e87" (UID: "087d762e-9478-43bb-b605-785abab29e87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.709407 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data" (OuterVolumeSpecName: "config-data") pod "087d762e-9478-43bb-b605-785abab29e87" (UID: "087d762e-9478-43bb-b605-785abab29e87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.779521 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.779558 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt8kt\" (UniqueName: \"kubernetes.io/projected/087d762e-9478-43bb-b605-785abab29e87-kube-api-access-vt8kt\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:10 crc kubenswrapper[4908]: I0131 07:45:10.779569 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d762e-9478-43bb-b605-785abab29e87-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.497008 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"087d762e-9478-43bb-b605-785abab29e87","Type":"ContainerDied","Data":"5759b9b3e2a214219282a2ffc5d6545ed512954e8bb6052be63f360bf773cbc1"}
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.497312 4908 scope.go:117] "RemoveContainer" containerID="b8726aaf218aa30ac788bc3684339fc4f8a70ed41a2d8a472060b5274863bbea"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.497040 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.499249 4908 generic.go:334] "Generic (PLEG): container finished" podID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerID="453b9889b3198c44ed9724aa0533a7871e1f549fba4a0585cb4f741292407c26" exitCode=0
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.499308 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" event={"ID":"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c","Type":"ContainerDied","Data":"453b9889b3198c44ed9724aa0533a7871e1f549fba4a0585cb4f741292407c26"}
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.499462 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" event={"ID":"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c","Type":"ContainerStarted","Data":"255ffa39874c3e21bf702f0ad4a8f65da4e6254abe3efb6c3cfd6e60519479c1"}
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.702972 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.711150 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.726191 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 07:45:11 crc kubenswrapper[4908]: E0131 07:45:11.726596 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087d762e-9478-43bb-b605-785abab29e87" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.726616 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="087d762e-9478-43bb-b605-785abab29e87" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.726798 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="087d762e-9478-43bb-b605-785abab29e87" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.727591 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.742552 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.743435 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.743803 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.746939 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.801404 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.801676 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.801818 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t542l\" (UniqueName: \"kubernetes.io/projected/343522a1-814f-4826-aec8-bdf76f6f9659-kube-api-access-t542l\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.801943 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.802134 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.904709 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.904831 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.904886 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.904935 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t542l\" (UniqueName: \"kubernetes.io/projected/343522a1-814f-4826-aec8-bdf76f6f9659-kube-api-access-t542l\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.905007 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.908757 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.909248 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.909301 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.911364 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343522a1-814f-4826-aec8-bdf76f6f9659-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.934808 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t542l\" (UniqueName: \"kubernetes.io/projected/343522a1-814f-4826-aec8-bdf76f6f9659-kube-api-access-t542l\") pod \"nova-cell1-novncproxy-0\" (UID: \"343522a1-814f-4826-aec8-bdf76f6f9659\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.950826 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="087d762e-9478-43bb-b605-785abab29e87" path="/var/lib/kubelet/pods/087d762e-9478-43bb-b605-785abab29e87/volumes"
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.951674 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.952034 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-central-agent" containerID="cri-o://763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250" gracePeriod=30
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.952151 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="sg-core" containerID="cri-o://303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5" gracePeriod=30
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.952189 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="proxy-httpd" containerID="cri-o://d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae" gracePeriod=30
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.952426 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-notification-agent" containerID="cri-o://0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51" gracePeriod=30
Jan 31 07:45:11 crc kubenswrapper[4908]: I0131 07:45:11.978889 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.185:3000/\": EOF"
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.043806 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.102511 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.509417 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" event={"ID":"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c","Type":"ContainerStarted","Data":"1e6a311aeb0ee5ba70b137e2b4abd6ce7aa0542daed0e80f5d00cf047cc5ea26"}
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.511133 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv"
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514230 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerID="d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae" exitCode=0
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514257 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerID="303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5" exitCode=2
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514264 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerID="763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250" exitCode=0
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514310 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerDied","Data":"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae"}
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514340 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerDied","Data":"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5"}
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.514353 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerDied","Data":"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250"}
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.516622 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-log" containerID="cri-o://2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b" gracePeriod=30
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.516964 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-api" containerID="cri-o://76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719" gracePeriod=30
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.529805 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" podStartSLOduration=3.529789207 podStartE2EDuration="3.529789207s" podCreationTimestamp="2026-01-31 07:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:12.528614228 +0000 UTC m=+1419.144558892" watchObservedRunningTime="2026-01-31 07:45:12.529789207 +0000 UTC m=+1419.145733861"
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.568924 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 07:45:12 crc kubenswrapper[4908]: W0131 07:45:12.569265 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod343522a1_814f_4826_aec8_bdf76f6f9659.slice/crio-d5afb8149c4c7a83b90ba68bca5952f7923e4f09afc6b4f93f38a2886ab9b50e WatchSource:0}: Error finding container d5afb8149c4c7a83b90ba68bca5952f7923e4f09afc6b4f93f38a2886ab9b50e: Status 404 returned error can't find the container with id d5afb8149c4c7a83b90ba68bca5952f7923e4f09afc6b4f93f38a2886ab9b50e
Jan 31 07:45:12 crc kubenswrapper[4908]: I0131 07:45:12.835382 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 31 07:45:13 crc kubenswrapper[4908]: I0131 07:45:13.529177 4908 generic.go:334] "Generic (PLEG): container finished" podID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerID="2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b" exitCode=143
Jan 31 07:45:13 crc kubenswrapper[4908]: I0131 07:45:13.529248 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerDied","Data":"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b"}
Jan 31 07:45:13 crc kubenswrapper[4908]: I0131 07:45:13.531117 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"343522a1-814f-4826-aec8-bdf76f6f9659","Type":"ContainerStarted","Data":"4bef5ffe9a7f24aae69c82f74a23aa82b609a0d97d05bf0046157fc2b3d267e8"}
Jan 31 07:45:13 crc kubenswrapper[4908]: I0131 07:45:13.531164 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"343522a1-814f-4826-aec8-bdf76f6f9659","Type":"ContainerStarted","Data":"d5afb8149c4c7a83b90ba68bca5952f7923e4f09afc6b4f93f38a2886ab9b50e"}
Jan 31 07:45:13 crc kubenswrapper[4908]: I0131 07:45:13.549585 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.549568624 podStartE2EDuration="2.549568624s" podCreationTimestamp="2026-01-31 07:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:13.548466077 +0000 UTC m=+1420.164410751" watchObservedRunningTime="2026-01-31 07:45:13.549568624 +0000 UTC m=+1420.165513278"
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.401747 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486035 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486386 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486411 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rjw\" (UniqueName: \"kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486431 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486470 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486516 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486605 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.486858 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.487069 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.487421 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.487441 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4052ef1-dec0-41b1-816b-44386da0b2b0-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.491798 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw" (OuterVolumeSpecName: "kube-api-access-w9rjw") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "kube-api-access-w9rjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.522266 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.543082 4908 generic.go:334] "Generic (PLEG): container finished" podID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerID="0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51" exitCode=0
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.544090 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerDied","Data":"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51"}
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.544128 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4052ef1-dec0-41b1-816b-44386da0b2b0","Type":"ContainerDied","Data":"87f8427517be8472c2a44f2ee459273127eab08c35559e77a66867d3c173d0d3"}
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.544148 4908 scope.go:117] "RemoveContainer" containerID="d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae"
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.544174 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.589001 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts\") pod \"d4052ef1-dec0-41b1-816b-44386da0b2b0\" (UID: \"d4052ef1-dec0-41b1-816b-44386da0b2b0\") "
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.589807 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.591629 4908 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.592037 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rjw\" (UniqueName: \"kubernetes.io/projected/d4052ef1-dec0-41b1-816b-44386da0b2b0-kube-api-access-w9rjw\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.592170 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.592450 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts" (OuterVolumeSpecName: "scripts") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.595906 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.653814 4908 scope.go:117] "RemoveContainer" containerID="303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.662220 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data" (OuterVolumeSpecName: "config-data") pod "d4052ef1-dec0-41b1-816b-44386da0b2b0" (UID: "d4052ef1-dec0-41b1-816b-44386da0b2b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.682079 4908 scope.go:117] "RemoveContainer" containerID="0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.694396 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.694437 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.694514 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4052ef1-dec0-41b1-816b-44386da0b2b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.702515 4908 scope.go:117] "RemoveContainer" containerID="763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.733175 4908 scope.go:117] "RemoveContainer" containerID="d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae" Jan 31 07:45:14 crc 
kubenswrapper[4908]: E0131 07:45:14.734578 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae\": container with ID starting with d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae not found: ID does not exist" containerID="d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.734632 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae"} err="failed to get container status \"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae\": rpc error: code = NotFound desc = could not find container \"d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae\": container with ID starting with d882624de1d04d55a1d6c0745405773db20253614de00c3f5ea8261446796dae not found: ID does not exist" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.734671 4908 scope.go:117] "RemoveContainer" containerID="303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.735308 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5\": container with ID starting with 303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5 not found: ID does not exist" containerID="303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.735357 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5"} err="failed to get container status 
\"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5\": rpc error: code = NotFound desc = could not find container \"303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5\": container with ID starting with 303319d4f0804551bc848ec0ce1062391964c0da1b6e444a7b0d8ee67eb4e6d5 not found: ID does not exist" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.735394 4908 scope.go:117] "RemoveContainer" containerID="0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.735749 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51\": container with ID starting with 0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51 not found: ID does not exist" containerID="0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.735786 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51"} err="failed to get container status \"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51\": rpc error: code = NotFound desc = could not find container \"0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51\": container with ID starting with 0d50e5f5ca32079a55a3d93a930622a3cb2757c3d0e634afa6edd0f0fc284c51 not found: ID does not exist" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.735809 4908 scope.go:117] "RemoveContainer" containerID="763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.736121 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250\": container with ID starting with 763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250 not found: ID does not exist" containerID="763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.736154 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250"} err="failed to get container status \"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250\": rpc error: code = NotFound desc = could not find container \"763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250\": container with ID starting with 763c4af5c9363863bc5ef8feb3b25e4b0192ccc85a4e69cb788a10ef98247250 not found: ID does not exist" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.878533 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.888716 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.900243 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.900727 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-notification-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.900749 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-notification-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.900769 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-central-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 
07:45:14.900777 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-central-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.900791 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="sg-core" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.900802 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="sg-core" Jan 31 07:45:14 crc kubenswrapper[4908]: E0131 07:45:14.900817 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="proxy-httpd" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.900825 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="proxy-httpd" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.901063 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="proxy-httpd" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.901093 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-central-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.901108 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="sg-core" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.901123 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" containerName="ceilometer-notification-agent" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.903107 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.906683 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.906900 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.907122 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.916010 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.998881 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.998924 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfmr\" (UniqueName: \"kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.998962 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.999006 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.999039 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.999127 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.999169 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:14 crc kubenswrapper[4908]: I0131 07:45:14.999268 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100052 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts\") pod \"ceilometer-0\" (UID: 
\"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100113 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100135 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfmr\" (UniqueName: \"kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100166 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100189 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100214 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100260 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100289 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.100967 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.101157 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.104860 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.105056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 
07:45:15.105137 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.106035 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.106231 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.116485 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfmr\" (UniqueName: \"kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr\") pod \"ceilometer-0\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.229695 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.651773 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:45:15 crc kubenswrapper[4908]: I0131 07:45:15.955491 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4052ef1-dec0-41b1-816b-44386da0b2b0" path="/var/lib/kubelet/pods/d4052ef1-dec0-41b1-816b-44386da0b2b0/volumes" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.055142 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.117180 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs\") pod \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.117332 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle\") pod \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.117389 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5xk\" (UniqueName: \"kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk\") pod \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\" (UID: \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.117427 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data\") pod \"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\" (UID: 
\"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281\") " Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.119756 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs" (OuterVolumeSpecName: "logs") pod "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" (UID: "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.123223 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk" (OuterVolumeSpecName: "kube-api-access-4s5xk") pod "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" (UID: "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281"). InnerVolumeSpecName "kube-api-access-4s5xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.148730 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" (UID: "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.148845 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data" (OuterVolumeSpecName: "config-data") pod "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" (UID: "1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.219110 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5xk\" (UniqueName: \"kubernetes.io/projected/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-kube-api-access-4s5xk\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.219148 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.219160 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.219172 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.567127 4908 generic.go:334] "Generic (PLEG): container finished" podID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerID="76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719" exitCode=0 Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.567186 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerDied","Data":"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719"} Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.567202 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.568069 4908 scope.go:117] "RemoveContainer" containerID="76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.567997 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281","Type":"ContainerDied","Data":"cd01b26857fd128c06a18791d306e07d7f736c4b966c81d0f662ace825c3a568"} Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.569752 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerStarted","Data":"9db0fac8c22fd03cb0a7b92b87d300daa1fc4a35ffd092a51e5290123b4184fb"} Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.569879 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerStarted","Data":"a6bd73170bbed5714b5a36dbed657dab246832f5ab5c43632f0b94a99cc52fd6"} Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.588760 4908 scope.go:117] "RemoveContainer" containerID="2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.601548 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.612598 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.625733 4908 scope.go:117] "RemoveContainer" containerID="76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719" Jan 31 07:45:16 crc kubenswrapper[4908]: E0131 07:45:16.628316 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719\": container with ID starting with 76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719 not found: ID does not exist" containerID="76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.628346 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719"} err="failed to get container status \"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719\": rpc error: code = NotFound desc = could not find container \"76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719\": container with ID starting with 76e6a9fdf5a65d5daddfc502032b87d371e3cee8bdf8ea53c612efeaa3b8a719 not found: ID does not exist" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.628370 4908 scope.go:117] "RemoveContainer" containerID="2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b" Jan 31 07:45:16 crc kubenswrapper[4908]: E0131 07:45:16.630966 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b\": container with ID starting with 2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b not found: ID does not exist" containerID="2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.631020 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b"} err="failed to get container status \"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b\": rpc error: code = NotFound desc = could not find container \"2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b\": container with ID 
starting with 2a9727ae158882231710147cc192ffc6b14b9495cbfd98afbd2c1f7f9f9fc42b not found: ID does not exist" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.637068 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:16 crc kubenswrapper[4908]: E0131 07:45:16.637500 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-api" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.637518 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-api" Jan 31 07:45:16 crc kubenswrapper[4908]: E0131 07:45:16.637529 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-log" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.637536 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-log" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.637708 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-log" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.637717 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" containerName="nova-api-api" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.638658 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.644620 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.644634 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.644952 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.658529 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726034 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726084 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726136 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726244 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726445 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pgb\" (UniqueName: \"kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.726612 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828254 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828335 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828359 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc 
kubenswrapper[4908]: I0131 07:45:16.828413 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828432 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828476 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pgb\" (UniqueName: \"kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.828874 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.834022 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.834056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.834735 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.834763 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.846041 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pgb\" (UniqueName: \"kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb\") pod \"nova-api-0\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " pod="openstack/nova-api-0" Jan 31 07:45:16 crc kubenswrapper[4908]: I0131 07:45:16.958224 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:17 crc kubenswrapper[4908]: I0131 07:45:17.044651 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:45:17 crc kubenswrapper[4908]: I0131 07:45:17.467460 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:17 crc kubenswrapper[4908]: W0131 07:45:17.477497 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb335b96_74ee_4b2b_b0f0_1cb0a63b5fe2.slice/crio-362ddd2a704d12cb15664637bb15fc67c4dbd949f648d92551b51d70c85a4d58 WatchSource:0}: Error finding container 362ddd2a704d12cb15664637bb15fc67c4dbd949f648d92551b51d70c85a4d58: Status 404 returned error can't find the container with id 362ddd2a704d12cb15664637bb15fc67c4dbd949f648d92551b51d70c85a4d58 Jan 31 07:45:17 crc kubenswrapper[4908]: I0131 07:45:17.580207 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerStarted","Data":"362ddd2a704d12cb15664637bb15fc67c4dbd949f648d92551b51d70c85a4d58"} Jan 31 07:45:17 crc kubenswrapper[4908]: I0131 07:45:17.952136 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281" path="/var/lib/kubelet/pods/1fc06b3d-2daf-4dbd-8dd7-0cba97ba7281/volumes" Jan 31 07:45:18 crc kubenswrapper[4908]: I0131 07:45:18.589454 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerStarted","Data":"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f"} Jan 31 07:45:18 crc kubenswrapper[4908]: I0131 07:45:18.589736 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerStarted","Data":"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8"} Jan 31 07:45:18 crc kubenswrapper[4908]: I0131 07:45:18.594089 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerStarted","Data":"82080723f2252835eadeba740395d67bb1eaa2c0863221f56f3e58e0f7c838ad"} Jan 31 07:45:18 crc kubenswrapper[4908]: I0131 07:45:18.594175 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerStarted","Data":"16cbf5496603fc4d9ec91a8cb68dcd8805f40b41b93bd370dde8c679d970251a"} Jan 31 07:45:18 crc kubenswrapper[4908]: I0131 07:45:18.613349 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.613331608 podStartE2EDuration="2.613331608s" podCreationTimestamp="2026-01-31 07:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:18.609699098 +0000 UTC m=+1425.225643772" watchObservedRunningTime="2026-01-31 07:45:18.613331608 +0000 UTC m=+1425.229276262" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.038167 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.093121 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.093400 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="dnsmasq-dns" containerID="cri-o://485ae5a2d2d226f7b88468a57785dbdecee0eb45b29ec35b121558efdb484fb3" 
gracePeriod=10 Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.622244 4908 generic.go:334] "Generic (PLEG): container finished" podID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerID="485ae5a2d2d226f7b88468a57785dbdecee0eb45b29ec35b121558efdb484fb3" exitCode=0 Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.622455 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" event={"ID":"96c5593c-c77f-4dc7-9d32-d95b094fe1b3","Type":"ContainerDied","Data":"485ae5a2d2d226f7b88468a57785dbdecee0eb45b29ec35b121558efdb484fb3"} Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.735355 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.811504 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb\") pod \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.811823 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb\") pod \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.811861 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5jw\" (UniqueName: \"kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw\") pod \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.812055 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc\") pod \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.812086 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config\") pod \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\" (UID: \"96c5593c-c77f-4dc7-9d32-d95b094fe1b3\") " Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.822972 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw" (OuterVolumeSpecName: "kube-api-access-zm5jw") pod "96c5593c-c77f-4dc7-9d32-d95b094fe1b3" (UID: "96c5593c-c77f-4dc7-9d32-d95b094fe1b3"). InnerVolumeSpecName "kube-api-access-zm5jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.875737 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96c5593c-c77f-4dc7-9d32-d95b094fe1b3" (UID: "96c5593c-c77f-4dc7-9d32-d95b094fe1b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.879216 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96c5593c-c77f-4dc7-9d32-d95b094fe1b3" (UID: "96c5593c-c77f-4dc7-9d32-d95b094fe1b3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.881126 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config" (OuterVolumeSpecName: "config") pod "96c5593c-c77f-4dc7-9d32-d95b094fe1b3" (UID: "96c5593c-c77f-4dc7-9d32-d95b094fe1b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.888204 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96c5593c-c77f-4dc7-9d32-d95b094fe1b3" (UID: "96c5593c-c77f-4dc7-9d32-d95b094fe1b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.913781 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.913824 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.913838 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:20 crc kubenswrapper[4908]: I0131 07:45:20.913853 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:20 crc kubenswrapper[4908]: 
I0131 07:45:20.913865 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5jw\" (UniqueName: \"kubernetes.io/projected/96c5593c-c77f-4dc7-9d32-d95b094fe1b3-kube-api-access-zm5jw\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.631186 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" event={"ID":"96c5593c-c77f-4dc7-9d32-d95b094fe1b3","Type":"ContainerDied","Data":"b663f267fce0fa6d9d7ef87b753cf3607622da67a308519ed1ac2d7e591c8fec"} Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.632238 4908 scope.go:117] "RemoveContainer" containerID="485ae5a2d2d226f7b88468a57785dbdecee0eb45b29ec35b121558efdb484fb3" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.631216 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-m4hwz" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.633672 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerStarted","Data":"6dd6f37e595f22acc4ab98d88422db9041d09aeecbfeb11f138dd024d4b5eb84"} Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.633831 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.651210 4908 scope.go:117] "RemoveContainer" containerID="3c598a1705b0d0f7cc6b724347ae1843421936f95767e26e760b8b85bfb87b92" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.671357 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.821009065 podStartE2EDuration="7.671339686s" podCreationTimestamp="2026-01-31 07:45:14 +0000 UTC" firstStartedPulling="2026-01-31 07:45:15.658753951 +0000 UTC m=+1422.274698605" lastFinishedPulling="2026-01-31 07:45:20.509084572 +0000 UTC 
m=+1427.125029226" observedRunningTime="2026-01-31 07:45:21.667255495 +0000 UTC m=+1428.283200169" watchObservedRunningTime="2026-01-31 07:45:21.671339686 +0000 UTC m=+1428.287284340" Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.707110 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.717788 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-m4hwz"] Jan 31 07:45:21 crc kubenswrapper[4908]: I0131 07:45:21.950119 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" path="/var/lib/kubelet/pods/96c5593c-c77f-4dc7-9d32-d95b094fe1b3/volumes" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.045128 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.087530 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.663264 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.854439 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pws6x"] Jan 31 07:45:22 crc kubenswrapper[4908]: E0131 07:45:22.854780 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="dnsmasq-dns" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.854798 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="dnsmasq-dns" Jan 31 07:45:22 crc kubenswrapper[4908]: E0131 07:45:22.854811 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="init" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.854817 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="init" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.854992 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c5593c-c77f-4dc7-9d32-d95b094fe1b3" containerName="dnsmasq-dns" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.855551 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.858430 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.858554 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.863901 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pws6x"] Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.955470 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.955682 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vrcj\" (UniqueName: \"kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:22 crc 
kubenswrapper[4908]: I0131 07:45:22.955805 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:22 crc kubenswrapper[4908]: I0131 07:45:22.955933 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.057562 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.057707 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.057771 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vrcj\" (UniqueName: \"kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 
07:45:23.057822 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.063643 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.064231 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.071841 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.092105 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vrcj\" (UniqueName: \"kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj\") pod \"nova-cell1-cell-mapping-pws6x\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.177574 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:23 crc kubenswrapper[4908]: I0131 07:45:23.658093 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pws6x"] Jan 31 07:45:23 crc kubenswrapper[4908]: W0131 07:45:23.658161 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f33284f_ffc2_4c8a_8b18_9b2b4083ed18.slice/crio-e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557 WatchSource:0}: Error finding container e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557: Status 404 returned error can't find the container with id e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557 Jan 31 07:45:24 crc kubenswrapper[4908]: I0131 07:45:24.665659 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pws6x" event={"ID":"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18","Type":"ContainerStarted","Data":"8637279c0317c8b9b4bef6804af9643143307db0805b845a2816105886485b27"} Jan 31 07:45:24 crc kubenswrapper[4908]: I0131 07:45:24.665918 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pws6x" event={"ID":"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18","Type":"ContainerStarted","Data":"e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557"} Jan 31 07:45:24 crc kubenswrapper[4908]: I0131 07:45:24.705239 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pws6x" podStartSLOduration=2.705221626 podStartE2EDuration="2.705221626s" podCreationTimestamp="2026-01-31 07:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:24.69932165 +0000 UTC m=+1431.315266304" watchObservedRunningTime="2026-01-31 07:45:24.705221626 +0000 UTC m=+1431.321166280" Jan 31 07:45:26 crc 
kubenswrapper[4908]: I0131 07:45:26.958513 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:45:26 crc kubenswrapper[4908]: I0131 07:45:26.958864 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:45:27 crc kubenswrapper[4908]: I0131 07:45:27.975269 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:27 crc kubenswrapper[4908]: I0131 07:45:27.975561 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:28 crc kubenswrapper[4908]: I0131 07:45:28.698325 4908 generic.go:334] "Generic (PLEG): container finished" podID="7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" containerID="8637279c0317c8b9b4bef6804af9643143307db0805b845a2816105886485b27" exitCode=0 Jan 31 07:45:28 crc kubenswrapper[4908]: I0131 07:45:28.698592 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pws6x" event={"ID":"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18","Type":"ContainerDied","Data":"8637279c0317c8b9b4bef6804af9643143307db0805b845a2816105886485b27"} Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.047791 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.197707 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts\") pod \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.197755 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data\") pod \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.197794 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle\") pod \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.197862 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vrcj\" (UniqueName: \"kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj\") pod \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\" (UID: \"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18\") " Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.209161 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts" (OuterVolumeSpecName: "scripts") pod "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" (UID: "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.209203 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj" (OuterVolumeSpecName: "kube-api-access-8vrcj") pod "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" (UID: "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18"). InnerVolumeSpecName "kube-api-access-8vrcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.222723 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data" (OuterVolumeSpecName: "config-data") pod "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" (UID: "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.228473 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" (UID: "7f33284f-ffc2-4c8a-8b18-9b2b4083ed18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.299537 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.299851 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.299864 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.299876 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vrcj\" (UniqueName: \"kubernetes.io/projected/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18-kube-api-access-8vrcj\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.718118 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pws6x" event={"ID":"7f33284f-ffc2-4c8a-8b18-9b2b4083ed18","Type":"ContainerDied","Data":"e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557"} Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.718162 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85fbc3d60ff442df962a002384fdc620615a24a900b43b420c4c3f112ca9557" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.718184 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pws6x" Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.910318 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.910607 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-log" containerID="cri-o://640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8" gracePeriod=30 Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.910643 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-api" containerID="cri-o://77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f" gracePeriod=30 Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.922972 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.923202 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="23524731-05d4-4f72-b7cb-95bc9004706a" containerName="nova-scheduler-scheduler" containerID="cri-o://e250bd7103af642db66be41dd2071ccb6bcdfa66515d4a6c3c92114bf793cced" gracePeriod=30 Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.940327 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.941248 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" containerID="cri-o://40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0" gracePeriod=30 Jan 31 07:45:30 crc kubenswrapper[4908]: I0131 07:45:30.941205 4908 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" containerID="cri-o://dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57" gracePeriod=30 Jan 31 07:45:31 crc kubenswrapper[4908]: I0131 07:45:31.736344 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerID="640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8" exitCode=143 Jan 31 07:45:31 crc kubenswrapper[4908]: I0131 07:45:31.736438 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerDied","Data":"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8"} Jan 31 07:45:31 crc kubenswrapper[4908]: I0131 07:45:31.739073 4908 generic.go:334] "Generic (PLEG): container finished" podID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerID="dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57" exitCode=143 Jan 31 07:45:31 crc kubenswrapper[4908]: I0131 07:45:31.739110 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerDied","Data":"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57"} Jan 31 07:45:33 crc kubenswrapper[4908]: I0131 07:45:33.807761 4908 generic.go:334] "Generic (PLEG): container finished" podID="23524731-05d4-4f72-b7cb-95bc9004706a" containerID="e250bd7103af642db66be41dd2071ccb6bcdfa66515d4a6c3c92114bf793cced" exitCode=0 Jan 31 07:45:33 crc kubenswrapper[4908]: I0131 07:45:33.808060 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"23524731-05d4-4f72-b7cb-95bc9004706a","Type":"ContainerDied","Data":"e250bd7103af642db66be41dd2071ccb6bcdfa66515d4a6c3c92114bf793cced"} Jan 31 07:45:33 crc kubenswrapper[4908]: I0131 
07:45:33.967687 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.075740 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data\") pod \"23524731-05d4-4f72-b7cb-95bc9004706a\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.075846 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle\") pod \"23524731-05d4-4f72-b7cb-95bc9004706a\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.075910 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bx2z\" (UniqueName: \"kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z\") pod \"23524731-05d4-4f72-b7cb-95bc9004706a\" (UID: \"23524731-05d4-4f72-b7cb-95bc9004706a\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.081800 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z" (OuterVolumeSpecName: "kube-api-access-7bx2z") pod "23524731-05d4-4f72-b7cb-95bc9004706a" (UID: "23524731-05d4-4f72-b7cb-95bc9004706a"). InnerVolumeSpecName "kube-api-access-7bx2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.082848 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": read tcp 10.217.0.2:54232->10.217.0.183:8775: read: connection reset by peer" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.082872 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": read tcp 10.217.0.2:54234->10.217.0.183:8775: read: connection reset by peer" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.102231 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data" (OuterVolumeSpecName: "config-data") pod "23524731-05d4-4f72-b7cb-95bc9004706a" (UID: "23524731-05d4-4f72-b7cb-95bc9004706a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.108205 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23524731-05d4-4f72-b7cb-95bc9004706a" (UID: "23524731-05d4-4f72-b7cb-95bc9004706a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.178655 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.178693 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23524731-05d4-4f72-b7cb-95bc9004706a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.178710 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bx2z\" (UniqueName: \"kubernetes.io/projected/23524731-05d4-4f72-b7cb-95bc9004706a-kube-api-access-7bx2z\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.689597 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.695196 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789205 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data\") pod \"f879a45d-24c6-4508-829b-a3cdbcda3a33\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789315 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgtj\" (UniqueName: \"kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj\") pod \"f879a45d-24c6-4508-829b-a3cdbcda3a33\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789343 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789431 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs\") pod \"f879a45d-24c6-4508-829b-a3cdbcda3a33\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789463 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789494 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789515 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle\") pod \"f879a45d-24c6-4508-829b-a3cdbcda3a33\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789640 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs\") pod \"f879a45d-24c6-4508-829b-a3cdbcda3a33\" (UID: \"f879a45d-24c6-4508-829b-a3cdbcda3a33\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789689 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789721 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.789741 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6pgb\" (UniqueName: \"kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb\") pod \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\" (UID: \"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2\") " Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 
07:45:34.789805 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs" (OuterVolumeSpecName: "logs") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.790061 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs" (OuterVolumeSpecName: "logs") pod "f879a45d-24c6-4508-829b-a3cdbcda3a33" (UID: "f879a45d-24c6-4508-829b-a3cdbcda3a33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.790282 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.790300 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f879a45d-24c6-4508-829b-a3cdbcda3a33-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.796195 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb" (OuterVolumeSpecName: "kube-api-access-h6pgb") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "kube-api-access-h6pgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.797268 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj" (OuterVolumeSpecName: "kube-api-access-dwgtj") pod "f879a45d-24c6-4508-829b-a3cdbcda3a33" (UID: "f879a45d-24c6-4508-829b-a3cdbcda3a33"). InnerVolumeSpecName "kube-api-access-dwgtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.820148 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data" (OuterVolumeSpecName: "config-data") pod "f879a45d-24c6-4508-829b-a3cdbcda3a33" (UID: "f879a45d-24c6-4508-829b-a3cdbcda3a33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.821231 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"23524731-05d4-4f72-b7cb-95bc9004706a","Type":"ContainerDied","Data":"54669ab3a5475959bc651d02068cf22394142bd3250ce12001164aeb55721466"} Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.821316 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.821633 4908 scope.go:117] "RemoveContainer" containerID="e250bd7103af642db66be41dd2071ccb6bcdfa66515d4a6c3c92114bf793cced" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.831558 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerID="77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f" exitCode=0 Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.831648 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.831667 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerDied","Data":"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f"} Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.832026 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2","Type":"ContainerDied","Data":"362ddd2a704d12cb15664637bb15fc67c4dbd949f648d92551b51d70c85a4d58"} Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.840265 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.840284 4908 generic.go:334] "Generic (PLEG): container finished" podID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerID="40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0" exitCode=0 Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.840314 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerDied","Data":"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0"} Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.840640 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f879a45d-24c6-4508-829b-a3cdbcda3a33","Type":"ContainerDied","Data":"fefc03f386cee19c2ad515cf87023d51101ff202e117a17167565ed6f4a400fb"} Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.840331 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.841449 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data" (OuterVolumeSpecName: "config-data") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.853309 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f879a45d-24c6-4508-829b-a3cdbcda3a33" (UID: "f879a45d-24c6-4508-829b-a3cdbcda3a33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.868558 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f879a45d-24c6-4508-829b-a3cdbcda3a33" (UID: "f879a45d-24c6-4508-829b-a3cdbcda3a33"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.872233 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.874016 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" (UID: "eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.896918 4908 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.897623 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.897730 4908 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.897871 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.897969 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.898104 4908 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.898198 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6pgb\" (UniqueName: \"kubernetes.io/projected/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2-kube-api-access-h6pgb\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc 
kubenswrapper[4908]: I0131 07:45:34.898296 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f879a45d-24c6-4508-829b-a3cdbcda3a33-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.898391 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgtj\" (UniqueName: \"kubernetes.io/projected/f879a45d-24c6-4508-829b-a3cdbcda3a33-kube-api-access-dwgtj\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.948217 4908 scope.go:117] "RemoveContainer" containerID="77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.958377 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.970529 4908 scope.go:117] "RemoveContainer" containerID="640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.972573 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985090 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985448 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-log" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985463 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-log" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985485 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-api" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985491 4908 
state_mem.go:107] "Deleted CPUSet assignment" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-api" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985501 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985507 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985518 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" containerName="nova-manage" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985523 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" containerName="nova-manage" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985539 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23524731-05d4-4f72-b7cb-95bc9004706a" containerName="nova-scheduler-scheduler" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985545 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="23524731-05d4-4f72-b7cb-95bc9004706a" containerName="nova-scheduler-scheduler" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.985554 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985560 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985696 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-log" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 
07:45:34.985711 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-log" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985724 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="23524731-05d4-4f72-b7cb-95bc9004706a" containerName="nova-scheduler-scheduler" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985732 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" containerName="nova-manage" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985740 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" containerName="nova-metadata-metadata" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.985746 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" containerName="nova-api-api" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.986322 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.994182 4908 scope.go:117] "RemoveContainer" containerID="77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.994615 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.995849 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f\": container with ID starting with 77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f not found: ID does not exist" containerID="77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.995886 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f"} err="failed to get container status \"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f\": rpc error: code = NotFound desc = could not find container \"77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f\": container with ID starting with 77923d011d6f020467f073c1ef10e6ee0a30b24723d7a12be5a82a1309f0b73f not found: ID does not exist" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.995909 4908 scope.go:117] "RemoveContainer" containerID="640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8" Jan 31 07:45:34 crc kubenswrapper[4908]: E0131 07:45:34.996224 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8\": container with ID starting with 
640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8 not found: ID does not exist" containerID="640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.996245 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8"} err="failed to get container status \"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8\": rpc error: code = NotFound desc = could not find container \"640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8\": container with ID starting with 640ad25e1321717c7f6db32dd4bdb862101c4693ceb950913c79773f3729acf8 not found: ID does not exist" Jan 31 07:45:34 crc kubenswrapper[4908]: I0131 07:45:34.996256 4908 scope.go:117] "RemoveContainer" containerID="40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.006553 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.023020 4908 scope.go:117] "RemoveContainer" containerID="dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.039122 4908 scope.go:117] "RemoveContainer" containerID="40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0" Jan 31 07:45:35 crc kubenswrapper[4908]: E0131 07:45:35.039617 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0\": container with ID starting with 40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0 not found: ID does not exist" containerID="40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.039731 4908 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0"} err="failed to get container status \"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0\": rpc error: code = NotFound desc = could not find container \"40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0\": container with ID starting with 40c3de0948c325a528f21357bd8c1b86cf63be9fad76f50818c3a538ee3046f0 not found: ID does not exist" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.039808 4908 scope.go:117] "RemoveContainer" containerID="dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57" Jan 31 07:45:35 crc kubenswrapper[4908]: E0131 07:45:35.040128 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57\": container with ID starting with dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57 not found: ID does not exist" containerID="dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.040202 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57"} err="failed to get container status \"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57\": rpc error: code = NotFound desc = could not find container \"dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57\": container with ID starting with dfaa8c3908d93737adf349c6bce0b7bd4065e1a4dd6784b465dfcf255479bb57 not found: ID does not exist" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.102255 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.102653 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9j48\" (UniqueName: \"kubernetes.io/projected/6d36e511-03b7-4d73-9a5e-ac775fafa866-kube-api-access-f9j48\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.102842 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-config-data\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.169599 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.182406 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.194714 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.204890 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9j48\" (UniqueName: \"kubernetes.io/projected/6d36e511-03b7-4d73-9a5e-ac775fafa866-kube-api-access-f9j48\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.205047 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-config-data\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.205116 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.208512 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.208784 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d36e511-03b7-4d73-9a5e-ac775fafa866-config-data\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.212398 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.228275 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9j48\" (UniqueName: \"kubernetes.io/projected/6d36e511-03b7-4d73-9a5e-ac775fafa866-kube-api-access-f9j48\") pod \"nova-scheduler-0\" (UID: \"6d36e511-03b7-4d73-9a5e-ac775fafa866\") " pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.230741 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.232150 4908 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.240671 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.240827 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.241142 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.254421 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.267371 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.268769 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.274354 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.274476 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.284704 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.317239 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417094 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd56p\" (UniqueName: \"kubernetes.io/projected/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-kube-api-access-rd56p\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417150 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417178 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417211 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417234 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7464ca4-eaeb-4a8f-a554-33acef353bfa-logs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 
07:45:35.417287 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-logs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417328 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417414 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftjr\" (UniqueName: \"kubernetes.io/projected/e7464ca4-eaeb-4a8f-a554-33acef353bfa-kube-api-access-hftjr\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417558 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-config-data\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417592 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.417619 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-config-data\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519417 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd56p\" (UniqueName: \"kubernetes.io/projected/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-kube-api-access-rd56p\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519455 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519471 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519491 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519507 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7464ca4-eaeb-4a8f-a554-33acef353bfa-logs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " 
pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519546 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-logs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519573 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519598 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hftjr\" (UniqueName: \"kubernetes.io/projected/e7464ca4-eaeb-4a8f-a554-33acef353bfa-kube-api-access-hftjr\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519659 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-config-data\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.519698 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-config-data\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.520801 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7464ca4-eaeb-4a8f-a554-33acef353bfa-logs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.525111 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-config-data\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.525719 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-logs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.526679 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.535106 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.535562 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.537834 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-config-data\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.542193 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7464ca4-eaeb-4a8f-a554-33acef353bfa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.542331 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd56p\" (UniqueName: \"kubernetes.io/projected/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-kube-api-access-rd56p\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.546160 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fe735d-6bf9-4174-a43e-43a5a79bf69b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7fe735d-6bf9-4174-a43e-43a5a79bf69b\") " pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.546246 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftjr\" (UniqueName: \"kubernetes.io/projected/e7464ca4-eaeb-4a8f-a554-33acef353bfa-kube-api-access-hftjr\") pod \"nova-metadata-0\" (UID: 
\"e7464ca4-eaeb-4a8f-a554-33acef353bfa\") " pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.550467 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.594471 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.774860 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:45:35 crc kubenswrapper[4908]: W0131 07:45:35.778315 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d36e511_03b7_4d73_9a5e_ac775fafa866.slice/crio-f1bc11ebe4830adf8fefe1f258a86dc45fe3b1b2943c7347f4bb31ccf5a75616 WatchSource:0}: Error finding container f1bc11ebe4830adf8fefe1f258a86dc45fe3b1b2943c7347f4bb31ccf5a75616: Status 404 returned error can't find the container with id f1bc11ebe4830adf8fefe1f258a86dc45fe3b1b2943c7347f4bb31ccf5a75616 Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.850943 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d36e511-03b7-4d73-9a5e-ac775fafa866","Type":"ContainerStarted","Data":"f1bc11ebe4830adf8fefe1f258a86dc45fe3b1b2943c7347f4bb31ccf5a75616"} Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.951289 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23524731-05d4-4f72-b7cb-95bc9004706a" path="/var/lib/kubelet/pods/23524731-05d4-4f72-b7cb-95bc9004706a/volumes" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.952571 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2" path="/var/lib/kubelet/pods/eb335b96-74ee-4b2b-b0f0-1cb0a63b5fe2/volumes" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.953302 4908 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f879a45d-24c6-4508-829b-a3cdbcda3a33" path="/var/lib/kubelet/pods/f879a45d-24c6-4508-829b-a3cdbcda3a33/volumes" Jan 31 07:45:35 crc kubenswrapper[4908]: I0131 07:45:35.999750 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:45:36 crc kubenswrapper[4908]: W0131 07:45:36.000563 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fe735d_6bf9_4174_a43e_43a5a79bf69b.slice/crio-037a70c05cd85713802ea5ae398a7fd1dd848636e8efc936006171f234442faf WatchSource:0}: Error finding container 037a70c05cd85713802ea5ae398a7fd1dd848636e8efc936006171f234442faf: Status 404 returned error can't find the container with id 037a70c05cd85713802ea5ae398a7fd1dd848636e8efc936006171f234442faf Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.121857 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:45:36 crc kubenswrapper[4908]: W0131 07:45:36.125909 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7464ca4_eaeb_4a8f_a554_33acef353bfa.slice/crio-b54a472747e8a75b1fd48b3ed93f5b64b3e4ba95e8cd42c80387ddb124373a47 WatchSource:0}: Error finding container b54a472747e8a75b1fd48b3ed93f5b64b3e4ba95e8cd42c80387ddb124373a47: Status 404 returned error can't find the container with id b54a472747e8a75b1fd48b3ed93f5b64b3e4ba95e8cd42c80387ddb124373a47 Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.865932 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7fe735d-6bf9-4174-a43e-43a5a79bf69b","Type":"ContainerStarted","Data":"beb4368761aaf9f65c6a8013921cd481e7cf4c32ff33e01ce81961e6656cedcf"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.866206 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"f7fe735d-6bf9-4174-a43e-43a5a79bf69b","Type":"ContainerStarted","Data":"2e45052a58f44080143944c4ae99f93f2421b20e65ddde59eaa6486c0536e450"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.866217 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7fe735d-6bf9-4174-a43e-43a5a79bf69b","Type":"ContainerStarted","Data":"037a70c05cd85713802ea5ae398a7fd1dd848636e8efc936006171f234442faf"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.873762 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d36e511-03b7-4d73-9a5e-ac775fafa866","Type":"ContainerStarted","Data":"9c01e48e4b9d30296d98dd86300a16fb198fb087dec32d6f9ff0f30317e7df68"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.876073 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7464ca4-eaeb-4a8f-a554-33acef353bfa","Type":"ContainerStarted","Data":"e08525d17fedadc4caf532660ecbf8c287fe06f30c6adeb44fd0a75c34f9c8f8"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.876129 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7464ca4-eaeb-4a8f-a554-33acef353bfa","Type":"ContainerStarted","Data":"9a88a92f6a5c3f90f984a038ff05794ed820a45447986c128651c00e6e581caf"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.876160 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7464ca4-eaeb-4a8f-a554-33acef353bfa","Type":"ContainerStarted","Data":"b54a472747e8a75b1fd48b3ed93f5b64b3e4ba95e8cd42c80387ddb124373a47"} Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.898128 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.898105707 podStartE2EDuration="1.898105707s" podCreationTimestamp="2026-01-31 07:45:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:36.893272927 +0000 UTC m=+1443.509217581" watchObservedRunningTime="2026-01-31 07:45:36.898105707 +0000 UTC m=+1443.514050371" Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.913119 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.913103238 podStartE2EDuration="2.913103238s" podCreationTimestamp="2026-01-31 07:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:36.905977382 +0000 UTC m=+1443.521922026" watchObservedRunningTime="2026-01-31 07:45:36.913103238 +0000 UTC m=+1443.529047892" Jan 31 07:45:36 crc kubenswrapper[4908]: I0131 07:45:36.928198 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.928176091 podStartE2EDuration="1.928176091s" podCreationTimestamp="2026-01-31 07:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:36.926308365 +0000 UTC m=+1443.542253039" watchObservedRunningTime="2026-01-31 07:45:36.928176091 +0000 UTC m=+1443.544120745" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.317600 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.431522 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.431576 4908 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.431618 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.432158 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.432212 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0" gracePeriod=600 Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.596135 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.597130 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.915387 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0" exitCode=0 Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.915465 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0"} Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.915520 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"} Jan 31 07:45:40 crc kubenswrapper[4908]: I0131 07:45:40.915540 4908 scope.go:117] "RemoveContainer" containerID="c224d07a5673ea9c5d3566a1e4b3b321889159f5901a3aea765d960e0553cfde" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.241191 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.314118 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.351811 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.551439 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.551488 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.595821 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.595876 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:45:45 crc kubenswrapper[4908]: I0131 07:45:45.993671 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 07:45:46 crc kubenswrapper[4908]: I0131 07:45:46.576300 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7fe735d-6bf9-4174-a43e-43a5a79bf69b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:46 crc kubenswrapper[4908]: I0131 07:45:46.576305 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7fe735d-6bf9-4174-a43e-43a5a79bf69b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:46 crc kubenswrapper[4908]: I0131 07:45:46.630317 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7464ca4-eaeb-4a8f-a554-33acef353bfa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:46 crc kubenswrapper[4908]: I0131 07:45:46.630216 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7464ca4-eaeb-4a8f-a554-33acef353bfa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.556596 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.557277 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.557689 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.558052 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.566248 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.566783 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.618140 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.632337 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:45:55 crc kubenswrapper[4908]: I0131 07:45:55.637574 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:45:56 crc kubenswrapper[4908]: I0131 07:45:56.075439 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:46:04 crc kubenswrapper[4908]: I0131 07:46:04.428960 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:05 crc kubenswrapper[4908]: I0131 07:46:05.317811 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:08 crc kubenswrapper[4908]: I0131 07:46:08.322483 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="rabbitmq" containerID="cri-o://bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be" gracePeriod=604797 Jan 31 07:46:09 crc kubenswrapper[4908]: I0131 07:46:09.050563 4908 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="rabbitmq" containerID="cri-o://438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3" gracePeriod=604797 Jan 31 07:46:09 crc kubenswrapper[4908]: I0131 07:46:09.813774 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 31 07:46:10 crc kubenswrapper[4908]: I0131 07:46:10.204102 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.901933 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.927047 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.927114 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.927131 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.928515 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.940179 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:14 crc kubenswrapper[4908]: I0131 07:46:14.950327 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.028269 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.028331 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.028381 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.028536 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.030297 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.030576 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.030612 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2jz\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.030667 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf\") pod \"a1644408-1d98-43ed-b7eb-f399d80a7d10\" (UID: \"a1644408-1d98-43ed-b7eb-f399d80a7d10\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.030807 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031091 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031293 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031313 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031323 4908 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031332 4908 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1644408-1d98-43ed-b7eb-f399d80a7d10-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.031340 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.033368 4908 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.036077 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz" (OuterVolumeSpecName: "kube-api-access-sq2jz") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "kube-api-access-sq2jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.037776 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info" (OuterVolumeSpecName: "pod-info") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.074167 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data" (OuterVolumeSpecName: "config-data") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.081688 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf" (OuterVolumeSpecName: "server-conf") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.133176 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.133219 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.133232 4908 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1644408-1d98-43ed-b7eb-f399d80a7d10-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.133245 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2jz\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-kube-api-access-sq2jz\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.133258 4908 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1644408-1d98-43ed-b7eb-f399d80a7d10-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.159316 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a1644408-1d98-43ed-b7eb-f399d80a7d10" (UID: "a1644408-1d98-43ed-b7eb-f399d80a7d10"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.160835 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.234388 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1644408-1d98-43ed-b7eb-f399d80a7d10-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.234421 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.235600 4908 generic.go:334] "Generic (PLEG): container finished" podID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerID="bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be" exitCode=0 Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.235645 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerDied","Data":"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be"} Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.235677 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1644408-1d98-43ed-b7eb-f399d80a7d10","Type":"ContainerDied","Data":"c0bca229c05ee858a23d67a67ac967ff9c99701e49df70f21986d67549c09180"} Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.235698 4908 scope.go:117] "RemoveContainer" containerID="bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.235836 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.260895 4908 scope.go:117] "RemoveContainer" containerID="c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.279933 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.293048 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.306991 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:15 crc kubenswrapper[4908]: E0131 07:46:15.307367 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="setup-container" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.307390 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="setup-container" Jan 31 07:46:15 crc kubenswrapper[4908]: E0131 07:46:15.307413 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="rabbitmq" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.307419 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="rabbitmq" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.315896 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" containerName="rabbitmq" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.317071 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322022 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322053 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322227 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322303 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322387 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322452 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lsd77" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.322529 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.327663 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.376560 4908 scope.go:117] "RemoveContainer" containerID="bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be" Jan 31 07:46:15 crc kubenswrapper[4908]: E0131 07:46:15.376973 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be\": container with ID starting with bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be not found: ID does not exist" 
containerID="bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.377015 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be"} err="failed to get container status \"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be\": rpc error: code = NotFound desc = could not find container \"bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be\": container with ID starting with bd010ea0a4f22384a7cd4808ac77b0df3b776b11bb70a4be7e2992b77d2a98be not found: ID does not exist" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.377036 4908 scope.go:117] "RemoveContainer" containerID="c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2" Jan 31 07:46:15 crc kubenswrapper[4908]: E0131 07:46:15.377518 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2\": container with ID starting with c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2 not found: ID does not exist" containerID="c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.377567 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2"} err="failed to get container status \"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2\": rpc error: code = NotFound desc = could not find container \"c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2\": container with ID starting with c7d96a44eed3b5f2e9d0fc5bcb9ee82f21abe1cedd04285137bea567fecd04e2 not found: ID does not exist" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.437625 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.437682 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.437823 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.437862 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.437919 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438072 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438146 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438285 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438317 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438337 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.438405 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmmw\" (UniqueName: 
\"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-kube-api-access-zqmmw\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540110 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540434 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540481 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540515 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540535 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " 
pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540577 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540593 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540609 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540639 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqmmw\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-kube-api-access-zqmmw\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540661 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.540688 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.542126 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.542967 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.543017 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.543235 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.543284 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: 
\"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.545553 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.545724 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.548522 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.551733 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.551811 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.569926 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zqmmw\" (UniqueName: \"kubernetes.io/projected/b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32-kube-api-access-zqmmw\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.580608 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32\") " pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.642114 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.756475 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947160 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947219 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947375 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65rm\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" 
(UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947410 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947472 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947499 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947519 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947549 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947589 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins" 
(OuterVolumeSpecName: "rabbitmq-plugins") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947598 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947670 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947760 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls\") pod \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\" (UID: \"b4df218c-dfc0-4c17-8b5a-4649e3d4e710\") " Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.947915 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.948618 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1644408-1d98-43ed-b7eb-f399d80a7d10" path="/var/lib/kubelet/pods/a1644408-1d98-43ed-b7eb-f399d80a7d10/volumes" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.949769 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.949786 4908 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.952541 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.953538 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.953713 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.955655 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.955757 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info" (OuterVolumeSpecName: "pod-info") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.956911 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm" (OuterVolumeSpecName: "kube-api-access-x65rm") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "kube-api-access-x65rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:15 crc kubenswrapper[4908]: I0131 07:46:15.977897 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data" (OuterVolumeSpecName: "config-data") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.007780 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf" (OuterVolumeSpecName: "server-conf") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054106 4908 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054142 4908 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054152 4908 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054174 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 07:46:16 crc 
kubenswrapper[4908]: I0131 07:46:16.054184 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054193 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054203 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x65rm\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-kube-api-access-x65rm\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.054212 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.057487 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b4df218c-dfc0-4c17-8b5a-4649e3d4e710" (UID: "b4df218c-dfc0-4c17-8b5a-4649e3d4e710"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.071870 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.119342 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.161261 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.161588 4908 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4df218c-dfc0-4c17-8b5a-4649e3d4e710-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.249497 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32","Type":"ContainerStarted","Data":"d4bee3142edf0dcfbd4b6a0c0c412bfa414e9a5e1b5d375805da2d034e60136f"} Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.251706 4908 generic.go:334] "Generic (PLEG): container finished" podID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerID="438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3" exitCode=0 Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.251741 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerDied","Data":"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3"} Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.251762 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b4df218c-dfc0-4c17-8b5a-4649e3d4e710","Type":"ContainerDied","Data":"77b409b12332f9d16d1207c19feb52042ee916c2fad40a0d5a00736d1b50b8b3"} Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.251784 4908 scope.go:117] "RemoveContainer" containerID="438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.251809 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.271355 4908 scope.go:117] "RemoveContainer" containerID="bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.287100 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.299190 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.313388 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:16 crc kubenswrapper[4908]: E0131 07:46:16.319530 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="rabbitmq" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.319772 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="rabbitmq" Jan 31 07:46:16 crc kubenswrapper[4908]: E0131 07:46:16.319876 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="setup-container" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.319955 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="setup-container" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 
07:46:16.320283 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" containerName="rabbitmq" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.320834 4908 scope.go:117] "RemoveContainer" containerID="438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.321488 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: E0131 07:46:16.323526 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3\": container with ID starting with 438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3 not found: ID does not exist" containerID="438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.323562 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3"} err="failed to get container status \"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3\": rpc error: code = NotFound desc = could not find container \"438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3\": container with ID starting with 438d4bdb5a64d2a268428029e93da88fc0e9b5834c50b2214ecf6268d68315d3 not found: ID does not exist" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.323583 4908 scope.go:117] "RemoveContainer" containerID="bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2" Jan 31 07:46:16 crc kubenswrapper[4908]: E0131 07:46:16.323921 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2\": container with ID starting with bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2 not found: ID does not exist" containerID="bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.323957 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2"} err="failed to get container status \"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2\": rpc error: code = NotFound desc = could not find container \"bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2\": container with ID starting with bb91ab1f3fd499a30cb095e03eda92509acf7fe46024be71ac15353a041686a2 not found: ID does not exist" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328310 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328403 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328527 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328632 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328688 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.328804 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pnrnz" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.329005 4908 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.335844 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467125 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467211 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467244 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467260 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467302 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467332 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467362 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/676f585e-f8bd-4dd3-bcab-e75c830382e3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467391 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/676f585e-f8bd-4dd3-bcab-e75c830382e3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467458 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467631 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t6gg5\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-kube-api-access-t6gg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.467736 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.569018 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6gg5\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-kube-api-access-t6gg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.569276 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.569660 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570155 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570215 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570244 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570268 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570411 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570472 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570511 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/676f585e-f8bd-4dd3-bcab-e75c830382e3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570569 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/676f585e-f8bd-4dd3-bcab-e75c830382e3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.570611 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.571501 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.571570 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.572338 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.572872 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.573702 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/676f585e-f8bd-4dd3-bcab-e75c830382e3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.576088 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/676f585e-f8bd-4dd3-bcab-e75c830382e3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.576493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.576864 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/676f585e-f8bd-4dd3-bcab-e75c830382e3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.577129 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.599565 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6gg5\" (UniqueName: \"kubernetes.io/projected/676f585e-f8bd-4dd3-bcab-e75c830382e3-kube-api-access-t6gg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.604044 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"676f585e-f8bd-4dd3-bcab-e75c830382e3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:16 crc kubenswrapper[4908]: I0131 07:46:16.645245 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:17 crc kubenswrapper[4908]: I0131 07:46:17.098767 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:46:17 crc kubenswrapper[4908]: W0131 07:46:17.105248 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod676f585e_f8bd_4dd3_bcab_e75c830382e3.slice/crio-d2e8c2410a324ae7a99896f4fcaffff6c1703841c037f9d8d6928451c968a62d WatchSource:0}: Error finding container d2e8c2410a324ae7a99896f4fcaffff6c1703841c037f9d8d6928451c968a62d: Status 404 returned error can't find the container with id d2e8c2410a324ae7a99896f4fcaffff6c1703841c037f9d8d6928451c968a62d Jan 31 07:46:17 crc kubenswrapper[4908]: I0131 07:46:17.260816 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"676f585e-f8bd-4dd3-bcab-e75c830382e3","Type":"ContainerStarted","Data":"d2e8c2410a324ae7a99896f4fcaffff6c1703841c037f9d8d6928451c968a62d"} Jan 31 07:46:17 crc kubenswrapper[4908]: I0131 07:46:17.949383 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4df218c-dfc0-4c17-8b5a-4649e3d4e710" path="/var/lib/kubelet/pods/b4df218c-dfc0-4c17-8b5a-4649e3d4e710/volumes" Jan 31 07:46:18 crc kubenswrapper[4908]: I0131 07:46:18.273055 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32","Type":"ContainerStarted","Data":"144c712b588d9577a46d8a1c6c0a0c263af3ba98a9fb9776b07d6eab0dff034b"} Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.282174 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"676f585e-f8bd-4dd3-bcab-e75c830382e3","Type":"ContainerStarted","Data":"4b7cf4d8f73857ea698970681cfaa965d7d057a954784e34a4dd422a5ac33ed7"} Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.830085 
4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.832490 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.836640 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.843668 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927093 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927121 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927169 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927199 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8929s\" (UniqueName: \"kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:19 crc kubenswrapper[4908]: I0131 07:46:19.927421 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028653 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028736 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028775 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028794 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028826 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.028848 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8929s\" (UniqueName: \"kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.029718 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.029869 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.029997 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.030294 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.030610 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.050643 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8929s\" (UniqueName: \"kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s\") pod \"dnsmasq-dns-578b8d767c-5dw5w\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.167632 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:20 crc kubenswrapper[4908]: W0131 07:46:20.647781 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38edd38e_7763_43ad_8593_7c49b22d8004.slice/crio-96f116138d0a8a9bf515249d629ffc79b35ea41a005e70ec162ea272be851dad WatchSource:0}: Error finding container 96f116138d0a8a9bf515249d629ffc79b35ea41a005e70ec162ea272be851dad: Status 404 returned error can't find the container with id 96f116138d0a8a9bf515249d629ffc79b35ea41a005e70ec162ea272be851dad Jan 31 07:46:20 crc kubenswrapper[4908]: I0131 07:46:20.648747 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:21 crc kubenswrapper[4908]: I0131 07:46:21.300280 4908 generic.go:334] "Generic (PLEG): container finished" podID="38edd38e-7763-43ad-8593-7c49b22d8004" containerID="2776c2759cce2a8154f8fd6c65bdca5565ace693e98abd2253e23bfb8267a822" exitCode=0 Jan 31 07:46:21 crc kubenswrapper[4908]: I0131 07:46:21.300536 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" event={"ID":"38edd38e-7763-43ad-8593-7c49b22d8004","Type":"ContainerDied","Data":"2776c2759cce2a8154f8fd6c65bdca5565ace693e98abd2253e23bfb8267a822"} Jan 31 07:46:21 crc kubenswrapper[4908]: I0131 07:46:21.300561 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" event={"ID":"38edd38e-7763-43ad-8593-7c49b22d8004","Type":"ContainerStarted","Data":"96f116138d0a8a9bf515249d629ffc79b35ea41a005e70ec162ea272be851dad"} Jan 31 07:46:22 crc kubenswrapper[4908]: I0131 07:46:22.317464 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" event={"ID":"38edd38e-7763-43ad-8593-7c49b22d8004","Type":"ContainerStarted","Data":"5ee73f47e0b789f005fd382cffde06e0d21532b0074f023830700f522439636a"} Jan 31 07:46:22 crc 
kubenswrapper[4908]: I0131 07:46:22.346255 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" podStartSLOduration=3.346217398 podStartE2EDuration="3.346217398s" podCreationTimestamp="2026-01-31 07:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:46:22.344684366 +0000 UTC m=+1488.960629070" watchObservedRunningTime="2026-01-31 07:46:22.346217398 +0000 UTC m=+1488.962162072" Jan 31 07:46:23 crc kubenswrapper[4908]: I0131 07:46:23.333042 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.168902 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.241460 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"] Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.241693 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="dnsmasq-dns" containerID="cri-o://1e6a311aeb0ee5ba70b137e2b4abd6ce7aa0542daed0e80f5d00cf047cc5ea26" gracePeriod=10 Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.396108 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.417705 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.457069 4908 generic.go:334] "Generic (PLEG): container finished" podID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerID="1e6a311aeb0ee5ba70b137e2b4abd6ce7aa0542daed0e80f5d00cf047cc5ea26" exitCode=0 Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.457111 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" event={"ID":"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c","Type":"ContainerDied","Data":"1e6a311aeb0ee5ba70b137e2b4abd6ce7aa0542daed0e80f5d00cf047cc5ea26"} Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.465325 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.530800 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.531079 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.531223 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 
07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.531364 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbn2h\" (UniqueName: \"kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.531508 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.531644 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633155 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633247 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " 
pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633308 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633366 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633438 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.633508 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbn2h\" (UniqueName: \"kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.634275 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc 
kubenswrapper[4908]: I0131 07:46:30.634728 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.634912 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.635485 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.635583 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.686583 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbn2h\" (UniqueName: \"kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h\") pod \"dnsmasq-dns-fbc59fbb7-8qn5b\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.775749 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:30 crc kubenswrapper[4908]: I0131 07:46:30.922522 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.067918 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config\") pod \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.067965 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc\") pod \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.068150 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb\") pod \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.068256 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb\") pod \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\" (UID: \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.068306 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9257\" (UniqueName: \"kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257\") pod \"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\" (UID: 
\"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c\") " Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.078193 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257" (OuterVolumeSpecName: "kube-api-access-z9257") pod "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" (UID: "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c"). InnerVolumeSpecName "kube-api-access-z9257". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.111535 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" (UID: "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.112616 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" (UID: "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.117622 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" (UID: "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.123741 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config" (OuterVolumeSpecName: "config") pod "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" (UID: "48ebbbad-0bef-4b23-b229-eba1eb8dfd6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.171023 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.171144 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.171179 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9257\" (UniqueName: \"kubernetes.io/projected/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-kube-api-access-z9257\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.171197 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.171210 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.260602 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 07:46:31 crc 
kubenswrapper[4908]: I0131 07:46:31.468940 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" event={"ID":"96324b48-9ef5-4df7-aa47-d586f228789e","Type":"ContainerStarted","Data":"24ce2957b07b2acf905e6bbc313705ea768fe1592030dc4b638830ad1ca8db1c"} Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.472375 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" event={"ID":"48ebbbad-0bef-4b23-b229-eba1eb8dfd6c","Type":"ContainerDied","Data":"255ffa39874c3e21bf702f0ad4a8f65da4e6254abe3efb6c3cfd6e60519479c1"} Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.472443 4908 scope.go:117] "RemoveContainer" containerID="1e6a311aeb0ee5ba70b137e2b4abd6ce7aa0542daed0e80f5d00cf047cc5ea26" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.472656 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-gjqxv" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.493541 4908 scope.go:117] "RemoveContainer" containerID="453b9889b3198c44ed9724aa0533a7871e1f549fba4a0585cb4f741292407c26" Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.522522 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"] Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.534598 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-gjqxv"] Jan 31 07:46:31 crc kubenswrapper[4908]: I0131 07:46:31.953026 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" path="/var/lib/kubelet/pods/48ebbbad-0bef-4b23-b229-eba1eb8dfd6c/volumes" Jan 31 07:46:32 crc kubenswrapper[4908]: I0131 07:46:32.482719 4908 generic.go:334] "Generic (PLEG): container finished" podID="96324b48-9ef5-4df7-aa47-d586f228789e" containerID="66f2f7229c139f71c0f8c76e9348aced376bfd53255a8536bffdf4bb0d680b9a" exitCode=0 Jan 31 
07:46:32 crc kubenswrapper[4908]: I0131 07:46:32.482789 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" event={"ID":"96324b48-9ef5-4df7-aa47-d586f228789e","Type":"ContainerDied","Data":"66f2f7229c139f71c0f8c76e9348aced376bfd53255a8536bffdf4bb0d680b9a"} Jan 31 07:46:33 crc kubenswrapper[4908]: I0131 07:46:33.495672 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" event={"ID":"96324b48-9ef5-4df7-aa47-d586f228789e","Type":"ContainerStarted","Data":"e3544921f1f0faf9b4d8e5928970a63e9386801cfe8c66cd77a49374592af458"} Jan 31 07:46:33 crc kubenswrapper[4908]: I0131 07:46:33.496003 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:33 crc kubenswrapper[4908]: I0131 07:46:33.520691 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" podStartSLOduration=3.520670806 podStartE2EDuration="3.520670806s" podCreationTimestamp="2026-01-31 07:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:46:33.516446202 +0000 UTC m=+1500.132390856" watchObservedRunningTime="2026-01-31 07:46:33.520670806 +0000 UTC m=+1500.136615460" Jan 31 07:46:40 crc kubenswrapper[4908]: I0131 07:46:40.777777 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 07:46:40 crc kubenswrapper[4908]: I0131 07:46:40.844197 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:40 crc kubenswrapper[4908]: I0131 07:46:40.844469 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="dnsmasq-dns" 
containerID="cri-o://5ee73f47e0b789f005fd382cffde06e0d21532b0074f023830700f522439636a" gracePeriod=10 Jan 31 07:46:41 crc kubenswrapper[4908]: I0131 07:46:41.585025 4908 generic.go:334] "Generic (PLEG): container finished" podID="38edd38e-7763-43ad-8593-7c49b22d8004" containerID="5ee73f47e0b789f005fd382cffde06e0d21532b0074f023830700f522439636a" exitCode=0 Jan 31 07:46:41 crc kubenswrapper[4908]: I0131 07:46:41.585209 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" event={"ID":"38edd38e-7763-43ad-8593-7c49b22d8004","Type":"ContainerDied","Data":"5ee73f47e0b789f005fd382cffde06e0d21532b0074f023830700f522439636a"} Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.104432 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195336 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195436 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8929s\" (UniqueName: \"kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195530 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195577 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195603 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.195642 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.204874 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s" (OuterVolumeSpecName: "kube-api-access-8929s") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "kube-api-access-8929s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.259701 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.265642 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.269341 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: E0131 07:46:42.274370 4908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config podName:38edd38e-7763-43ad-8593-7c49b22d8004 nodeName:}" failed. No retries permitted until 2026-01-31 07:46:42.774343404 +0000 UTC m=+1509.390288058 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004") : error deleting /var/lib/kubelet/pods/38edd38e-7763-43ad-8593-7c49b22d8004/volume-subpaths: remove /var/lib/kubelet/pods/38edd38e-7763-43ad-8593-7c49b22d8004/volume-subpaths: no such file or directory Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.274612 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.298973 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8929s\" (UniqueName: \"kubernetes.io/projected/38edd38e-7763-43ad-8593-7c49b22d8004-kube-api-access-8929s\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.299157 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.299172 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.299184 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc 
kubenswrapper[4908]: I0131 07:46:42.299195 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.596166 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" event={"ID":"38edd38e-7763-43ad-8593-7c49b22d8004","Type":"ContainerDied","Data":"96f116138d0a8a9bf515249d629ffc79b35ea41a005e70ec162ea272be851dad"} Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.596246 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-5dw5w" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.596468 4908 scope.go:117] "RemoveContainer" containerID="5ee73f47e0b789f005fd382cffde06e0d21532b0074f023830700f522439636a" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.617802 4908 scope.go:117] "RemoveContainer" containerID="2776c2759cce2a8154f8fd6c65bdca5565ace693e98abd2253e23bfb8267a822" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.807043 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") pod \"38edd38e-7763-43ad-8593-7c49b22d8004\" (UID: \"38edd38e-7763-43ad-8593-7c49b22d8004\") " Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.807660 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config" (OuterVolumeSpecName: "config") pod "38edd38e-7763-43ad-8593-7c49b22d8004" (UID: "38edd38e-7763-43ad-8593-7c49b22d8004"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.909613 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38edd38e-7763-43ad-8593-7c49b22d8004-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.935149 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:42 crc kubenswrapper[4908]: I0131 07:46:42.944778 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-5dw5w"] Jan 31 07:46:43 crc kubenswrapper[4908]: I0131 07:46:43.953346 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" path="/var/lib/kubelet/pods/38edd38e-7763-43ad-8593-7c49b22d8004/volumes" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.662699 4908 generic.go:334] "Generic (PLEG): container finished" podID="b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32" containerID="144c712b588d9577a46d8a1c6c0a0c263af3ba98a9fb9776b07d6eab0dff034b" exitCode=0 Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.662723 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32","Type":"ContainerDied","Data":"144c712b588d9577a46d8a1c6c0a0c263af3ba98a9fb9776b07d6eab0dff034b"} Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.666515 4908 generic.go:334] "Generic (PLEG): container finished" podID="676f585e-f8bd-4dd3-bcab-e75c830382e3" containerID="4b7cf4d8f73857ea698970681cfaa965d7d057a954784e34a4dd422a5ac33ed7" exitCode=0 Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.666565 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"676f585e-f8bd-4dd3-bcab-e75c830382e3","Type":"ContainerDied","Data":"4b7cf4d8f73857ea698970681cfaa965d7d057a954784e34a4dd422a5ac33ed7"} Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.933372 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t"] Jan 31 07:46:50 crc kubenswrapper[4908]: E0131 07:46:50.934299 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="init" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934323 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="init" Jan 31 07:46:50 crc kubenswrapper[4908]: E0131 07:46:50.934342 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934353 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: E0131 07:46:50.934386 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="init" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934394 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="init" Jan 31 07:46:50 crc kubenswrapper[4908]: E0131 07:46:50.934408 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934416 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934644 4908 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48ebbbad-0bef-4b23-b229-eba1eb8dfd6c" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.934668 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="38edd38e-7763-43ad-8593-7c49b22d8004" containerName="dnsmasq-dns" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.935462 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.938435 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.938612 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.938888 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.939670 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:46:50 crc kubenswrapper[4908]: I0131 07:46:50.942662 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t"] Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.057113 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.057173 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.057232 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.057316 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsz88\" (UniqueName: \"kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.159169 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsz88\" (UniqueName: \"kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.159685 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.159717 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.160565 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.166701 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.172165 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.172619 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.180568 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsz88\" (UniqueName: \"kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l447t\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.254438 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.677323 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32","Type":"ContainerStarted","Data":"afeb18d0dfb02b54cf2cbb3e0411136a0f1b3896d45fd1add537a1ab13f15c2e"} Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.679149 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.681542 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"676f585e-f8bd-4dd3-bcab-e75c830382e3","Type":"ContainerStarted","Data":"7c5327710c1a064fa61d8fd6bddec7ef5d3424855fc370acb0fd1b5e3008c628"} Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.682585 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.703741 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.703720805 podStartE2EDuration="36.703720805s" podCreationTimestamp="2026-01-31 07:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:46:51.702689467 +0000 UTC m=+1518.318634131" watchObservedRunningTime="2026-01-31 07:46:51.703720805 +0000 UTC m=+1518.319665459" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.728021 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.728003131 podStartE2EDuration="35.728003131s" podCreationTimestamp="2026-01-31 07:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-31 07:46:51.724858256 +0000 UTC m=+1518.340802900" watchObservedRunningTime="2026-01-31 07:46:51.728003131 +0000 UTC m=+1518.343947785" Jan 31 07:46:51 crc kubenswrapper[4908]: I0131 07:46:51.806809 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t"] Jan 31 07:46:52 crc kubenswrapper[4908]: I0131 07:46:52.691044 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" event={"ID":"6ab03dca-2cc4-4689-bac0-8d3fa92855c5","Type":"ContainerStarted","Data":"54d2c97e3d406a1d799b28519164a3ff966edf2cb55447f83ba5f2aeab0ba26c"} Jan 31 07:47:05 crc kubenswrapper[4908]: I0131 07:47:05.644875 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.197:5671: connect: connection refused" Jan 31 07:47:05 crc kubenswrapper[4908]: E0131 07:47:05.870750 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 31 07:47:05 crc kubenswrapper[4908]: E0131 07:47:05.870920 4908 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 07:47:05 crc kubenswrapper[4908]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 31 07:47:05 crc kubenswrapper[4908]: - hosts: all Jan 31 07:47:05 crc kubenswrapper[4908]: strategy: linear Jan 31 07:47:05 crc kubenswrapper[4908]: tasks: Jan 
31 07:47:05 crc kubenswrapper[4908]: - name: Enable podified-repos Jan 31 07:47:05 crc kubenswrapper[4908]: become: true Jan 31 07:47:05 crc kubenswrapper[4908]: ansible.builtin.shell: | Jan 31 07:47:05 crc kubenswrapper[4908]: set -euxo pipefail Jan 31 07:47:05 crc kubenswrapper[4908]: pushd /var/tmp Jan 31 07:47:05 crc kubenswrapper[4908]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 31 07:47:05 crc kubenswrapper[4908]: pushd repo-setup-main Jan 31 07:47:05 crc kubenswrapper[4908]: python3 -m venv ./venv Jan 31 07:47:05 crc kubenswrapper[4908]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 31 07:47:05 crc kubenswrapper[4908]: ./venv/bin/repo-setup current-podified -b antelope Jan 31 07:47:05 crc kubenswrapper[4908]: popd Jan 31 07:47:05 crc kubenswrapper[4908]: rm -rf repo-setup-main Jan 31 07:47:05 crc kubenswrapper[4908]: Jan 31 07:47:05 crc kubenswrapper[4908]: Jan 31 07:47:05 crc kubenswrapper[4908]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 31 07:47:05 crc kubenswrapper[4908]: edpm_override_hosts: openstack-edpm-ipam Jan 31 07:47:05 crc kubenswrapper[4908]: edpm_service_type: repo-setup Jan 31 07:47:05 crc kubenswrapper[4908]: Jan 31 07:47:05 crc kubenswrapper[4908]: Jan 31 07:47:05 crc kubenswrapper[4908]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsz88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-l447t_openstack(6ab03dca-2cc4-4689-bac0-8d3fa92855c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 31 07:47:05 crc kubenswrapper[4908]: > logger="UnhandledError" Jan 31 07:47:05 crc kubenswrapper[4908]: E0131 07:47:05.872116 4908 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" Jan 31 07:47:06 crc kubenswrapper[4908]: I0131 07:47:06.648401 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:47:06 crc kubenswrapper[4908]: E0131 07:47:06.819203 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" Jan 31 07:47:15 crc kubenswrapper[4908]: I0131 07:47:15.643606 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 07:47:20 crc kubenswrapper[4908]: I0131 07:47:20.942654 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" event={"ID":"6ab03dca-2cc4-4689-bac0-8d3fa92855c5","Type":"ContainerStarted","Data":"d6b08802f446c2274874f7d7b48ef3089160d151f3a33a24f50951594f9aa7bc"} Jan 31 07:47:20 crc kubenswrapper[4908]: I0131 07:47:20.967199 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" podStartSLOduration=2.579277165 podStartE2EDuration="30.967174402s" podCreationTimestamp="2026-01-31 07:46:50 +0000 UTC" firstStartedPulling="2026-01-31 07:46:51.82310905 +0000 UTC m=+1518.439053704" lastFinishedPulling="2026-01-31 07:47:20.211006287 +0000 UTC m=+1546.826950941" observedRunningTime="2026-01-31 
07:47:20.957952483 +0000 UTC m=+1547.573897137" watchObservedRunningTime="2026-01-31 07:47:20.967174402 +0000 UTC m=+1547.583119056" Jan 31 07:47:27 crc kubenswrapper[4908]: I0131 07:47:27.466830 4908 scope.go:117] "RemoveContainer" containerID="c77171f2efe0b99af1fef8b07aa42561940f514c261810a02748485c80129739" Jan 31 07:47:27 crc kubenswrapper[4908]: I0131 07:47:27.527151 4908 scope.go:117] "RemoveContainer" containerID="0fba0ecf14463cb8eb50c86337c160f74c904dd5caeafb6be3ab2ccd2b91b43e" Jan 31 07:47:27 crc kubenswrapper[4908]: I0131 07:47:27.573760 4908 scope.go:117] "RemoveContainer" containerID="c3ae7e69d0069c50ce083818c9ea6e8c53b38b22a9531538ddbc07d84ab7aac7" Jan 31 07:47:27 crc kubenswrapper[4908]: I0131 07:47:27.747647 4908 scope.go:117] "RemoveContainer" containerID="c51c83aeb7b3ca4c74797da5d96af03183118da7dd1844bc7c7a0bc3735f9e9c" Jan 31 07:47:32 crc kubenswrapper[4908]: I0131 07:47:32.055267 4908 generic.go:334] "Generic (PLEG): container finished" podID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" containerID="d6b08802f446c2274874f7d7b48ef3089160d151f3a33a24f50951594f9aa7bc" exitCode=0 Jan 31 07:47:32 crc kubenswrapper[4908]: I0131 07:47:32.055335 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" event={"ID":"6ab03dca-2cc4-4689-bac0-8d3fa92855c5","Type":"ContainerDied","Data":"d6b08802f446c2274874f7d7b48ef3089160d151f3a33a24f50951594f9aa7bc"} Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.757660 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.952237 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle\") pod \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.953414 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory\") pod \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.953545 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsz88\" (UniqueName: \"kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88\") pod \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.953649 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam\") pod \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\" (UID: \"6ab03dca-2cc4-4689-bac0-8d3fa92855c5\") " Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.961607 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88" (OuterVolumeSpecName: "kube-api-access-xsz88") pod "6ab03dca-2cc4-4689-bac0-8d3fa92855c5" (UID: "6ab03dca-2cc4-4689-bac0-8d3fa92855c5"). InnerVolumeSpecName "kube-api-access-xsz88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.961801 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6ab03dca-2cc4-4689-bac0-8d3fa92855c5" (UID: "6ab03dca-2cc4-4689-bac0-8d3fa92855c5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.984194 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory" (OuterVolumeSpecName: "inventory") pod "6ab03dca-2cc4-4689-bac0-8d3fa92855c5" (UID: "6ab03dca-2cc4-4689-bac0-8d3fa92855c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4908]: I0131 07:47:33.984529 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ab03dca-2cc4-4689-bac0-8d3fa92855c5" (UID: "6ab03dca-2cc4-4689-bac0-8d3fa92855c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.062454 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.062826 4908 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.062842 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.062856 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsz88\" (UniqueName: \"kubernetes.io/projected/6ab03dca-2cc4-4689-bac0-8d3fa92855c5-kube-api-access-xsz88\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.071480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" event={"ID":"6ab03dca-2cc4-4689-bac0-8d3fa92855c5","Type":"ContainerDied","Data":"54d2c97e3d406a1d799b28519164a3ff966edf2cb55447f83ba5f2aeab0ba26c"} Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.071522 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d2c97e3d406a1d799b28519164a3ff966edf2cb55447f83ba5f2aeab0ba26c" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.071525 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.155281 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"] Jan 31 07:47:34 crc kubenswrapper[4908]: E0131 07:47:34.155789 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.155815 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.156093 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.157150 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.163779 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.163928 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.165842 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.168482 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.194036 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"] Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.265919 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.266032 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.266066 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tchk\" (UniqueName: \"kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.266198 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.367870 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.367924 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tchk\" (UniqueName: \"kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.368066 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.368144 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.376739 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.383960 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tchk\" (UniqueName: \"kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.384532 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.384636 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.471446 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" Jan 31 07:47:34 crc kubenswrapper[4908]: I0131 07:47:34.956240 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"] Jan 31 07:47:34 crc kubenswrapper[4908]: W0131 07:47:34.960226 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277022d5_bc50_4ebf_a267_f835f1656f8d.slice/crio-32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05 WatchSource:0}: Error finding container 32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05: Status 404 returned error can't find the container with id 32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05 Jan 31 07:47:35 crc kubenswrapper[4908]: I0131 07:47:35.080958 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" event={"ID":"277022d5-bc50-4ebf-a267-f835f1656f8d","Type":"ContainerStarted","Data":"32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05"} Jan 31 07:47:40 crc kubenswrapper[4908]: I0131 07:47:40.431199 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:47:40 crc kubenswrapper[4908]: I0131 07:47:40.431865 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:47:51 crc kubenswrapper[4908]: I0131 07:47:51.405764 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:47:52 crc kubenswrapper[4908]: I0131 07:47:52.227882 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" event={"ID":"277022d5-bc50-4ebf-a267-f835f1656f8d","Type":"ContainerStarted","Data":"84e6c77034c3a1cc91f63f7a76fa5a4a775864ee8981d19e4d54d45ff3b63730"} Jan 31 07:47:52 crc kubenswrapper[4908]: I0131 07:47:52.253234 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" podStartSLOduration=1.812209548 podStartE2EDuration="18.253213883s" podCreationTimestamp="2026-01-31 07:47:34 +0000 UTC" firstStartedPulling="2026-01-31 07:47:34.962454764 +0000 UTC m=+1561.578399418" lastFinishedPulling="2026-01-31 07:47:51.403459099 +0000 UTC m=+1578.019403753" observedRunningTime="2026-01-31 07:47:52.245131784 +0000 UTC m=+1578.861076438" watchObservedRunningTime="2026-01-31 07:47:52.253213883 +0000 UTC m=+1578.869158547" Jan 31 07:48:10 crc kubenswrapper[4908]: I0131 07:48:10.431010 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 07:48:10 crc kubenswrapper[4908]: I0131 07:48:10.431471 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.654716 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"] Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.657142 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.666449 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"] Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.797137 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.797519 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.797680 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429jp\" (UniqueName: 
\"kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.899335 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.899471 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429jp\" (UniqueName: \"kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.899559 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.899997 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.900163 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.920269 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-429jp\" (UniqueName: \"kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp\") pod \"certified-operators-r5k8z\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:24 crc kubenswrapper[4908]: I0131 07:48:24.976587 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:25 crc kubenswrapper[4908]: I0131 07:48:25.524796 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"] Jan 31 07:48:25 crc kubenswrapper[4908]: I0131 07:48:25.561649 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerStarted","Data":"0eac267817631b36ad4a4a31d217e9b5adda2767b952034cab2ebf300225dca4"} Jan 31 07:48:26 crc kubenswrapper[4908]: I0131 07:48:26.576872 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerID="4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974" exitCode=0 Jan 31 07:48:26 crc kubenswrapper[4908]: I0131 07:48:26.576914 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerDied","Data":"4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974"} Jan 31 07:48:27 crc kubenswrapper[4908]: I0131 07:48:27.599754 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerStarted","Data":"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"} Jan 31 07:48:28 crc kubenswrapper[4908]: I0131 07:48:28.613634 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerID="5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70" exitCode=0 Jan 31 07:48:28 crc kubenswrapper[4908]: I0131 07:48:28.613714 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerDied","Data":"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"} Jan 31 07:48:29 crc kubenswrapper[4908]: I0131 07:48:29.625171 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerStarted","Data":"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"} Jan 31 07:48:29 crc kubenswrapper[4908]: I0131 07:48:29.653513 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5k8z" podStartSLOduration=3.072837739 podStartE2EDuration="5.653488713s" podCreationTimestamp="2026-01-31 07:48:24 +0000 UTC" firstStartedPulling="2026-01-31 07:48:26.579117696 +0000 UTC m=+1613.195062350" lastFinishedPulling="2026-01-31 07:48:29.15976867 +0000 UTC m=+1615.775713324" observedRunningTime="2026-01-31 07:48:29.645385035 +0000 UTC m=+1616.261329689" watchObservedRunningTime="2026-01-31 07:48:29.653488713 +0000 UTC m=+1616.269433367" Jan 31 07:48:34 crc kubenswrapper[4908]: I0131 07:48:34.977046 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:34 crc kubenswrapper[4908]: I0131 07:48:34.977696 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:35 crc kubenswrapper[4908]: I0131 07:48:35.019867 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:35 crc kubenswrapper[4908]: I0131 07:48:35.717243 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:35 crc kubenswrapper[4908]: I0131 07:48:35.776606 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"] Jan 31 07:48:37 crc kubenswrapper[4908]: I0131 07:48:37.690768 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5k8z" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="registry-server" containerID="cri-o://9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6" gracePeriod=2 Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.196289 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5k8z" Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.268005 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities\") pod \"eb3993eb-2be9-47b8-93c5-789d6004b977\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.268126 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content\") pod \"eb3993eb-2be9-47b8-93c5-789d6004b977\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.268252 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-429jp\" (UniqueName: \"kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp\") pod \"eb3993eb-2be9-47b8-93c5-789d6004b977\" (UID: \"eb3993eb-2be9-47b8-93c5-789d6004b977\") " Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.268778 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities" (OuterVolumeSpecName: "utilities") pod "eb3993eb-2be9-47b8-93c5-789d6004b977" (UID: "eb3993eb-2be9-47b8-93c5-789d6004b977"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.268918 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.279161 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp" (OuterVolumeSpecName: "kube-api-access-429jp") pod "eb3993eb-2be9-47b8-93c5-789d6004b977" (UID: "eb3993eb-2be9-47b8-93c5-789d6004b977"). InnerVolumeSpecName "kube-api-access-429jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.314331 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb3993eb-2be9-47b8-93c5-789d6004b977" (UID: "eb3993eb-2be9-47b8-93c5-789d6004b977"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.371146 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3993eb-2be9-47b8-93c5-789d6004b977-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.371188 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-429jp\" (UniqueName: \"kubernetes.io/projected/eb3993eb-2be9-47b8-93c5-789d6004b977-kube-api-access-429jp\") on node \"crc\" DevicePath \"\""
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.699267 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerID="9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6" exitCode=0
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.699306 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerDied","Data":"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"}
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.699323 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5k8z"
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.699338 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5k8z" event={"ID":"eb3993eb-2be9-47b8-93c5-789d6004b977","Type":"ContainerDied","Data":"0eac267817631b36ad4a4a31d217e9b5adda2767b952034cab2ebf300225dca4"}
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.699359 4908 scope.go:117] "RemoveContainer" containerID="9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.722842 4908 scope.go:117] "RemoveContainer" containerID="5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.735151 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"]
Jan 31 07:48:38 crc kubenswrapper[4908]: I0131 07:48:38.747305 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5k8z"]
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.249264 4908 scope.go:117] "RemoveContainer" containerID="4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.275089 4908 scope.go:117] "RemoveContainer" containerID="9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"
Jan 31 07:48:39 crc kubenswrapper[4908]: E0131 07:48:39.276493 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6\": container with ID starting with 9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6 not found: ID does not exist" containerID="9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.276526 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6"} err="failed to get container status \"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6\": rpc error: code = NotFound desc = could not find container \"9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6\": container with ID starting with 9cbd5ec0d7ac14a735bb472b6390d1e5578c8cfdb6024b513a50e96a0054eaa6 not found: ID does not exist"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.276546 4908 scope.go:117] "RemoveContainer" containerID="5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"
Jan 31 07:48:39 crc kubenswrapper[4908]: E0131 07:48:39.277080 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70\": container with ID starting with 5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70 not found: ID does not exist" containerID="5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.277125 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70"} err="failed to get container status \"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70\": rpc error: code = NotFound desc = could not find container \"5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70\": container with ID starting with 5b1c2a15ba27d629a56ed4b5896b09c9e4ad651902ac8bd76dea163b97b21d70 not found: ID does not exist"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.277157 4908 scope.go:117] "RemoveContainer" containerID="4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974"
Jan 31 07:48:39 crc kubenswrapper[4908]: E0131 07:48:39.277459 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974\": container with ID starting with 4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974 not found: ID does not exist" containerID="4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.277481 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974"} err="failed to get container status \"4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974\": rpc error: code = NotFound desc = could not find container \"4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974\": container with ID starting with 4a5be37d5f593dcbb225f47d37ed2adcf717a8778ca900374d56a0bb77fd6974 not found: ID does not exist"
Jan 31 07:48:39 crc kubenswrapper[4908]: I0131 07:48:39.954117 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" path="/var/lib/kubelet/pods/eb3993eb-2be9-47b8-93c5-789d6004b977/volumes"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.431636 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.431701 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.431755 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.432626 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.432705 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" gracePeriod=600
Jan 31 07:48:40 crc kubenswrapper[4908]: E0131 07:48:40.556100 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.717733 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" exitCode=0
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.717772 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"}
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.717802 4908 scope.go:117] "RemoveContainer" containerID="eba7927261e32ea7acb8227699daacd0fc29c715f0d6c37c890b4d99dd751ec0"
Jan 31 07:48:40 crc kubenswrapper[4908]: I0131 07:48:40.718100 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:48:40 crc kubenswrapper[4908]: E0131 07:48:40.718310 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:48:54 crc kubenswrapper[4908]: I0131 07:48:54.941196 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:48:54 crc kubenswrapper[4908]: E0131 07:48:54.942675 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:49:07 crc kubenswrapper[4908]: I0131 07:49:07.948213 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:49:07 crc kubenswrapper[4908]: E0131 07:49:07.949449 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:49:21 crc kubenswrapper[4908]: I0131 07:49:21.939814 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:49:21 crc kubenswrapper[4908]: E0131 07:49:21.940537 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:49:33 crc kubenswrapper[4908]: I0131 07:49:33.940319 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:49:33 crc kubenswrapper[4908]: E0131 07:49:33.941130 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:49:40 crc kubenswrapper[4908]: I0131 07:49:40.505933 4908 patch_prober.go:28] interesting pod/console-854d79fbc5-8nhzx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 07:49:40 crc kubenswrapper[4908]: I0131 07:49:40.506460 4908 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-854d79fbc5-8nhzx" podUID="a01a2586-5dc2-4df2-bb93-15f26367c079" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:49:45 crc kubenswrapper[4908]: I0131 07:49:45.940803 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:49:45 crc kubenswrapper[4908]: E0131 07:49:45.941460 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:49:59 crc kubenswrapper[4908]: I0131 07:49:59.943146 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:49:59 crc kubenswrapper[4908]: E0131 07:49:59.943765 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:50:12 crc kubenswrapper[4908]: I0131 07:50:12.940052 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:50:12 crc kubenswrapper[4908]: E0131 07:50:12.940891 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:50:27 crc kubenswrapper[4908]: I0131 07:50:27.955397 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:50:27 crc kubenswrapper[4908]: E0131 07:50:27.956559 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:50:36 crc kubenswrapper[4908]: I0131 07:50:36.054145 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0720-account-create-update-v6x8z"]
Jan 31 07:50:36 crc kubenswrapper[4908]: I0131 07:50:36.063916 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-klpc8"]
Jan 31 07:50:36 crc kubenswrapper[4908]: I0131 07:50:36.071884 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-klpc8"]
Jan 31 07:50:36 crc kubenswrapper[4908]: I0131 07:50:36.080486 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0720-account-create-update-v6x8z"]
Jan 31 07:50:37 crc kubenswrapper[4908]: I0131 07:50:37.953832 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498a5601-b656-4767-b223-2f59c64d76c5" path="/var/lib/kubelet/pods/498a5601-b656-4767-b223-2f59c64d76c5/volumes"
Jan 31 07:50:37 crc kubenswrapper[4908]: I0131 07:50:37.956245 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ede730-6518-4590-ade8-ecc313c8147f" path="/var/lib/kubelet/pods/98ede730-6518-4590-ade8-ecc313c8147f/volumes"
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.031731 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8mnl5"]
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.048225 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-61b8-account-create-update-mwxlx"]
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.066155 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8mnl5"]
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.074608 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-61b8-account-create-update-mwxlx"]
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.940083 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:50:39 crc kubenswrapper[4908]: E0131 07:50:39.940475 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.953260 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97fff54c-e113-417f-b87c-6e01eea5e6b7" path="/var/lib/kubelet/pods/97fff54c-e113-417f-b87c-6e01eea5e6b7/volumes"
Jan 31 07:50:39 crc kubenswrapper[4908]: I0131 07:50:39.954114 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5d5cc9-3f8f-427d-8675-bd883e86e4ee" path="/var/lib/kubelet/pods/ea5d5cc9-3f8f-427d-8675-bd883e86e4ee/volumes"
Jan 31 07:50:42 crc kubenswrapper[4908]: I0131 07:50:42.027282 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-x9fx8"]
Jan 31 07:50:42 crc kubenswrapper[4908]: I0131 07:50:42.035497 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f74b-account-create-update-nz9tr"]
Jan 31 07:50:42 crc kubenswrapper[4908]: I0131 07:50:42.044597 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-x9fx8"]
Jan 31 07:50:42 crc kubenswrapper[4908]: I0131 07:50:42.052076 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f74b-account-create-update-nz9tr"]
Jan 31 07:50:43 crc kubenswrapper[4908]: I0131 07:50:43.950878 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b43a84-efc2-407f-818a-afc0151b60d8" path="/var/lib/kubelet/pods/06b43a84-efc2-407f-818a-afc0151b60d8/volumes"
Jan 31 07:50:43 crc kubenswrapper[4908]: I0131 07:50:43.952178 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53b78a9-d76e-42fe-b026-f44b248941ea" path="/var/lib/kubelet/pods/e53b78a9-d76e-42fe-b026-f44b248941ea/volumes"
Jan 31 07:50:44 crc kubenswrapper[4908]: I0131 07:50:44.028372 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5n2cn"]
Jan 31 07:50:44 crc kubenswrapper[4908]: I0131 07:50:44.038027 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5n2cn"]
Jan 31 07:50:45 crc kubenswrapper[4908]: I0131 07:50:45.949335 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da434b5e-8218-4d2c-a269-b1ac41d172d0" path="/var/lib/kubelet/pods/da434b5e-8218-4d2c-a269-b1ac41d172d0/volumes"
Jan 31 07:50:51 crc kubenswrapper[4908]: I0131 07:50:51.946556 4908 generic.go:334] "Generic (PLEG): container finished" podID="277022d5-bc50-4ebf-a267-f835f1656f8d" containerID="84e6c77034c3a1cc91f63f7a76fa5a4a775864ee8981d19e4d54d45ff3b63730" exitCode=0
Jan 31 07:50:51 crc kubenswrapper[4908]: I0131 07:50:51.958403 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" event={"ID":"277022d5-bc50-4ebf-a267-f835f1656f8d","Type":"ContainerDied","Data":"84e6c77034c3a1cc91f63f7a76fa5a4a775864ee8981d19e4d54d45ff3b63730"}
Jan 31 07:50:52 crc kubenswrapper[4908]: I0131 07:50:52.940419 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31 07:50:52 crc kubenswrapper[4908]: E0131 07:50:52.941017 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.311037 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.414060 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tchk\" (UniqueName: \"kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk\") pod \"277022d5-bc50-4ebf-a267-f835f1656f8d\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") "
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.414123 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam\") pod \"277022d5-bc50-4ebf-a267-f835f1656f8d\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") "
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.414181 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle\") pod \"277022d5-bc50-4ebf-a267-f835f1656f8d\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") "
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.414260 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory\") pod \"277022d5-bc50-4ebf-a267-f835f1656f8d\" (UID: \"277022d5-bc50-4ebf-a267-f835f1656f8d\") "
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.419891 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "277022d5-bc50-4ebf-a267-f835f1656f8d" (UID: "277022d5-bc50-4ebf-a267-f835f1656f8d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.420576 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk" (OuterVolumeSpecName: "kube-api-access-2tchk") pod "277022d5-bc50-4ebf-a267-f835f1656f8d" (UID: "277022d5-bc50-4ebf-a267-f835f1656f8d"). InnerVolumeSpecName "kube-api-access-2tchk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.441384 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "277022d5-bc50-4ebf-a267-f835f1656f8d" (UID: "277022d5-bc50-4ebf-a267-f835f1656f8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.441939 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory" (OuterVolumeSpecName: "inventory") pod "277022d5-bc50-4ebf-a267-f835f1656f8d" (UID: "277022d5-bc50-4ebf-a267-f835f1656f8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.516163 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tchk\" (UniqueName: \"kubernetes.io/projected/277022d5-bc50-4ebf-a267-f835f1656f8d-kube-api-access-2tchk\") on node \"crc\" DevicePath \"\""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.516202 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.516215 4908 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.516226 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/277022d5-bc50-4ebf-a267-f835f1656f8d-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.965385 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk" event={"ID":"277022d5-bc50-4ebf-a267-f835f1656f8d","Type":"ContainerDied","Data":"32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05"}
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.965428 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32677852d39d1e6ec96d98d1264394ebe6b2c9fa2f0bd9d8fb8f07788e885c05"
Jan 31 07:50:53 crc kubenswrapper[4908]: I0131 07:50:53.965463 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.060328 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"]
Jan 31 07:50:54 crc kubenswrapper[4908]: E0131 07:50:54.060718 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="extract-content"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.060737 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="extract-content"
Jan 31 07:50:54 crc kubenswrapper[4908]: E0131 07:50:54.060763 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277022d5-bc50-4ebf-a267-f835f1656f8d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.060770 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="277022d5-bc50-4ebf-a267-f835f1656f8d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 07:50:54 crc kubenswrapper[4908]: E0131 07:50:54.060783 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="extract-utilities"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.060789 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="extract-utilities"
Jan 31 07:50:54 crc kubenswrapper[4908]: E0131 07:50:54.060811 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="registry-server"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.060817 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="registry-server"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.061011 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="277022d5-bc50-4ebf-a267-f835f1656f8d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.061030 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3993eb-2be9-47b8-93c5-789d6004b977" containerName="registry-server"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.061625 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.063733 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.064506 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.064928 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.066748 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.079868 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"]
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.229222 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntrm\" (UniqueName: \"kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.229310 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.229352 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.331433 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.332406 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.332594 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntrm\" (UniqueName: \"kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.346328 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.346953 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.355395 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntrm\" (UniqueName: \"kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.376265 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.917805 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"]
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.931284 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 07:50:54 crc kubenswrapper[4908]: I0131 07:50:54.975758 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" event={"ID":"5b900b02-6c80-43b1-96ac-7a4687cc4e65","Type":"ContainerStarted","Data":"8ec9d5860bd84819c64b0e050f795d94b6b2905de739b88c88212edbb97585ca"}
Jan 31 07:50:56 crc kubenswrapper[4908]: I0131 07:50:56.994792 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" event={"ID":"5b900b02-6c80-43b1-96ac-7a4687cc4e65","Type":"ContainerStarted","Data":"48f57522321517580267732d3cd4630dd0066853433e4e0c7b51fc2c1df85952"}
Jan 31 07:50:57 crc kubenswrapper[4908]: I0131 07:50:57.017463 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" podStartSLOduration=2.119928095 podStartE2EDuration="3.017444808s" podCreationTimestamp="2026-01-31 07:50:54 +0000 UTC" firstStartedPulling="2026-01-31 07:50:54.930947687 +0000 UTC m=+1761.546892351" lastFinishedPulling="2026-01-31 07:50:55.82846441 +0000 UTC m=+1762.444409064" observedRunningTime="2026-01-31 07:50:57.012273959 +0000 UTC m=+1763.628218613" watchObservedRunningTime="2026-01-31 07:50:57.017444808 +0000 UTC m=+1763.633389462"
Jan 31 07:51:03 crc kubenswrapper[4908]: I0131 07:51:03.940370 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def"
Jan 31
07:51:03 crc kubenswrapper[4908]: E0131 07:51:03.941041 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.033173 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-49kmd"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.048350 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kg2tc"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.066073 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-czx5b"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.079604 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7a8d-account-create-update-gh5qr"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.088312 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-czx5b"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.096149 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-49kmd"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.103520 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kg2tc"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.112224 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0c73-account-create-update-ckgrp"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.121063 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0c73-account-create-update-ckgrp"] Jan 31 07:51:06 crc 
kubenswrapper[4908]: I0131 07:51:06.129032 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7a8d-account-create-update-gh5qr"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.138796 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a3f7-account-create-update-xwb54"] Jan 31 07:51:06 crc kubenswrapper[4908]: I0131 07:51:06.146537 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a3f7-account-create-update-xwb54"] Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.950725 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1232d6b2-5c5a-478a-936e-cf5ab61abd80" path="/var/lib/kubelet/pods/1232d6b2-5c5a-478a-936e-cf5ab61abd80/volumes" Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.951510 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f29d7a2-caad-495f-a5c0-b58ddb2f2790" path="/var/lib/kubelet/pods/1f29d7a2-caad-495f-a5c0-b58ddb2f2790/volumes" Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.952123 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e90746-ae14-4f0b-af18-258e35239b0d" path="/var/lib/kubelet/pods/22e90746-ae14-4f0b-af18-258e35239b0d/volumes" Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.952905 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99373758-9712-495b-bca3-40192fac3419" path="/var/lib/kubelet/pods/99373758-9712-495b-bca3-40192fac3419/volumes" Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.953904 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbfeac6-331c-4893-be5c-40183532e503" path="/var/lib/kubelet/pods/9fbfeac6-331c-4893-be5c-40183532e503/volumes" Jan 31 07:51:07 crc kubenswrapper[4908]: I0131 07:51:07.954458 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcde0c8-5958-4c81-8860-1be3a31bcb5c" 
path="/var/lib/kubelet/pods/fdcde0c8-5958-4c81-8860-1be3a31bcb5c/volumes" Jan 31 07:51:16 crc kubenswrapper[4908]: I0131 07:51:16.046772 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4xfqn"] Jan 31 07:51:16 crc kubenswrapper[4908]: I0131 07:51:16.057296 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4xfqn"] Jan 31 07:51:16 crc kubenswrapper[4908]: I0131 07:51:16.941039 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:51:16 crc kubenswrapper[4908]: E0131 07:51:16.941315 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:51:17 crc kubenswrapper[4908]: I0131 07:51:17.048139 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-lt52x"] Jan 31 07:51:17 crc kubenswrapper[4908]: I0131 07:51:17.063324 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-lt52x"] Jan 31 07:51:17 crc kubenswrapper[4908]: I0131 07:51:17.950346 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c53a541-b434-4bf6-8b68-6b56f14fee52" path="/var/lib/kubelet/pods/0c53a541-b434-4bf6-8b68-6b56f14fee52/volumes" Jan 31 07:51:17 crc kubenswrapper[4908]: I0131 07:51:17.951238 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b314ce-a7db-42b5-b571-2f23c1065d37" path="/var/lib/kubelet/pods/35b314ce-a7db-42b5-b571-2f23c1065d37/volumes" Jan 31 07:51:27 crc kubenswrapper[4908]: I0131 07:51:27.952266 4908 scope.go:117] "RemoveContainer" 
containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:51:27 crc kubenswrapper[4908]: E0131 07:51:27.952914 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:51:27 crc kubenswrapper[4908]: I0131 07:51:27.975565 4908 scope.go:117] "RemoveContainer" containerID="c4d19058bd902ace6ac84ef709af12c044e1e14d8975f41aef1907effbae9761" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.009740 4908 scope.go:117] "RemoveContainer" containerID="4d0de9e42432c38a6fd8fe5bd64930e27ae7575eb23b426043b1f3315199f86b" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.041286 4908 scope.go:117] "RemoveContainer" containerID="4fa72dfb8e1a47fffe8534fee40655a18ca2d63841b56e3af3d802dbf9e4f09d" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.093151 4908 scope.go:117] "RemoveContainer" containerID="f50e6614aa35d50b94047b92b2bdcb064c0ad6ff15efe50dc3d4f93d26c0b004" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.125796 4908 scope.go:117] "RemoveContainer" containerID="200a62055b8110a7de5a961e12ac6c5a70c85bbd99038e46faf9ba720ec1d598" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.164497 4908 scope.go:117] "RemoveContainer" containerID="8dc774e1d2e9e88780381fad48fcee1627765a199313fab75a58b9ed0d9a83b5" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.217463 4908 scope.go:117] "RemoveContainer" containerID="5bd9d7538355958db4a02616f1dc697d8a25a8c9f57dff795fe49bccb9b6eadf" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.276034 4908 scope.go:117] "RemoveContainer" 
containerID="da0309834ce602d6d65f7f9b7e6454625ccfa7d4dc45bec65a022b2aa319b4ef" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.300904 4908 scope.go:117] "RemoveContainer" containerID="055bed958baff07e16aee7b003c020f28515e104d6b81ee059a44dc3b50fd0dd" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.337728 4908 scope.go:117] "RemoveContainer" containerID="3ecd7306e601d01a7c2f2be8420c80d74111a15ce1a330c64942316c09b14796" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.380461 4908 scope.go:117] "RemoveContainer" containerID="915c80c54e252450fc700445c187d06efd153965433bfc4474e318bf80bfaccf" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.414100 4908 scope.go:117] "RemoveContainer" containerID="76a1780294639786eb222fdf49fa236ba7eb6f9e3d9984706a6129dc95607916" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.434905 4908 scope.go:117] "RemoveContainer" containerID="5abdf72af07280f20669c568f8344aa7660c20c711b9af3c360863cf5e4cc72a" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.475944 4908 scope.go:117] "RemoveContainer" containerID="9ffad97bdd94d95624830aa4001770e601176f665032368cd234ac41dcefc58f" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.495635 4908 scope.go:117] "RemoveContainer" containerID="9cd31e4113c0a9128d3d5415f958e99bcf9ee0251f9d07ec5593740f5dfc421e" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.518447 4908 scope.go:117] "RemoveContainer" containerID="1b0973d51ee69d7c2cd26f28fb04fae2f0a5d5648f9085ed9b7b9774817ddfca" Jan 31 07:51:28 crc kubenswrapper[4908]: I0131 07:51:28.539250 4908 scope.go:117] "RemoveContainer" containerID="e0f3915037dc7bc82589d800d49cfab360555f4719e2be36274ee4a9375d5974" Jan 31 07:51:42 crc kubenswrapper[4908]: I0131 07:51:42.941272 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:51:42 crc kubenswrapper[4908]: E0131 07:51:42.942250 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:51:54 crc kubenswrapper[4908]: I0131 07:51:54.941465 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:51:54 crc kubenswrapper[4908]: E0131 07:51:54.942486 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:04 crc kubenswrapper[4908]: I0131 07:52:04.646397 4908 generic.go:334] "Generic (PLEG): container finished" podID="5b900b02-6c80-43b1-96ac-7a4687cc4e65" containerID="48f57522321517580267732d3cd4630dd0066853433e4e0c7b51fc2c1df85952" exitCode=0 Jan 31 07:52:04 crc kubenswrapper[4908]: I0131 07:52:04.646472 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" event={"ID":"5b900b02-6c80-43b1-96ac-7a4687cc4e65","Type":"ContainerDied","Data":"48f57522321517580267732d3cd4630dd0066853433e4e0c7b51fc2c1df85952"} Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.049354 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.182873 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory\") pod \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.183259 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam\") pod \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.183392 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntrm\" (UniqueName: \"kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm\") pod \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\" (UID: \"5b900b02-6c80-43b1-96ac-7a4687cc4e65\") " Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.189636 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm" (OuterVolumeSpecName: "kube-api-access-bntrm") pod "5b900b02-6c80-43b1-96ac-7a4687cc4e65" (UID: "5b900b02-6c80-43b1-96ac-7a4687cc4e65"). InnerVolumeSpecName "kube-api-access-bntrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.210842 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory" (OuterVolumeSpecName: "inventory") pod "5b900b02-6c80-43b1-96ac-7a4687cc4e65" (UID: "5b900b02-6c80-43b1-96ac-7a4687cc4e65"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.213365 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5b900b02-6c80-43b1-96ac-7a4687cc4e65" (UID: "5b900b02-6c80-43b1-96ac-7a4687cc4e65"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.285901 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.285963 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b900b02-6c80-43b1-96ac-7a4687cc4e65-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.285997 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntrm\" (UniqueName: \"kubernetes.io/projected/5b900b02-6c80-43b1-96ac-7a4687cc4e65-kube-api-access-bntrm\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.663312 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" event={"ID":"5b900b02-6c80-43b1-96ac-7a4687cc4e65","Type":"ContainerDied","Data":"8ec9d5860bd84819c64b0e050f795d94b6b2905de739b88c88212edbb97585ca"} Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.663360 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec9d5860bd84819c64b0e050f795d94b6b2905de739b88c88212edbb97585ca" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 
07:52:06.663464 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.745225 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct"] Jan 31 07:52:06 crc kubenswrapper[4908]: E0131 07:52:06.745601 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b900b02-6c80-43b1-96ac-7a4687cc4e65" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.745619 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b900b02-6c80-43b1-96ac-7a4687cc4e65" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.745832 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b900b02-6c80-43b1-96ac-7a4687cc4e65" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.746583 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.749842 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.750098 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.753351 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.756084 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.759688 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct"] Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.895898 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.896021 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: 
I0131 07:52:06.896098 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868gn\" (UniqueName: \"kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.997111 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.997639 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:06 crc kubenswrapper[4908]: I0131 07:52:06.997706 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868gn\" (UniqueName: \"kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:07 crc kubenswrapper[4908]: I0131 07:52:07.000611 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:07 crc kubenswrapper[4908]: I0131 07:52:07.002816 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:07 crc kubenswrapper[4908]: I0131 07:52:07.018306 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868gn\" (UniqueName: \"kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sdvct\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:07 crc kubenswrapper[4908]: I0131 07:52:07.070271 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:12 crc kubenswrapper[4908]: I0131 07:52:07.572317 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct"] Jan 31 07:52:12 crc kubenswrapper[4908]: I0131 07:52:07.670158 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" event={"ID":"38cb28b5-2325-4864-8a7d-fb90210a6053","Type":"ContainerStarted","Data":"83e81576529680d5faf4de387d57e2576b7d57c335dfeef59c0e26c6904294c7"} Jan 31 07:52:12 crc kubenswrapper[4908]: I0131 07:52:07.949256 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:52:12 crc kubenswrapper[4908]: E0131 07:52:07.949538 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:12 crc kubenswrapper[4908]: I0131 07:52:12.715370 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" event={"ID":"38cb28b5-2325-4864-8a7d-fb90210a6053","Type":"ContainerStarted","Data":"cfed6f9e002ec9468c878446ccbdb018141c87f9cf96960d0e47756e1eeaa2e8"} Jan 31 07:52:16 crc kubenswrapper[4908]: I0131 07:52:16.745844 4908 generic.go:334] "Generic (PLEG): container finished" podID="38cb28b5-2325-4864-8a7d-fb90210a6053" containerID="cfed6f9e002ec9468c878446ccbdb018141c87f9cf96960d0e47756e1eeaa2e8" exitCode=0 Jan 31 07:52:16 crc kubenswrapper[4908]: I0131 07:52:16.746044 4908 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" event={"ID":"38cb28b5-2325-4864-8a7d-fb90210a6053","Type":"ContainerDied","Data":"cfed6f9e002ec9468c878446ccbdb018141c87f9cf96960d0e47756e1eeaa2e8"} Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.137955 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.213015 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam\") pod \"38cb28b5-2325-4864-8a7d-fb90210a6053\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.213182 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-868gn\" (UniqueName: \"kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn\") pod \"38cb28b5-2325-4864-8a7d-fb90210a6053\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.213312 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory\") pod \"38cb28b5-2325-4864-8a7d-fb90210a6053\" (UID: \"38cb28b5-2325-4864-8a7d-fb90210a6053\") " Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.220307 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn" (OuterVolumeSpecName: "kube-api-access-868gn") pod "38cb28b5-2325-4864-8a7d-fb90210a6053" (UID: "38cb28b5-2325-4864-8a7d-fb90210a6053"). InnerVolumeSpecName "kube-api-access-868gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.237201 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38cb28b5-2325-4864-8a7d-fb90210a6053" (UID: "38cb28b5-2325-4864-8a7d-fb90210a6053"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.239375 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory" (OuterVolumeSpecName: "inventory") pod "38cb28b5-2325-4864-8a7d-fb90210a6053" (UID: "38cb28b5-2325-4864-8a7d-fb90210a6053"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.315595 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.315645 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-868gn\" (UniqueName: \"kubernetes.io/projected/38cb28b5-2325-4864-8a7d-fb90210a6053-kube-api-access-868gn\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.315660 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38cb28b5-2325-4864-8a7d-fb90210a6053-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.762915 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" 
event={"ID":"38cb28b5-2325-4864-8a7d-fb90210a6053","Type":"ContainerDied","Data":"83e81576529680d5faf4de387d57e2576b7d57c335dfeef59c0e26c6904294c7"} Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.762968 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e81576529680d5faf4de387d57e2576b7d57c335dfeef59c0e26c6904294c7" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.762998 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.843851 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh"] Jan 31 07:52:18 crc kubenswrapper[4908]: E0131 07:52:18.844485 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cb28b5-2325-4864-8a7d-fb90210a6053" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.844505 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cb28b5-2325-4864-8a7d-fb90210a6053" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.844669 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="38cb28b5-2325-4864-8a7d-fb90210a6053" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.845263 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.847699 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.847922 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.848092 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.848133 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.858393 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh"] Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.925514 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.925592 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7297\" (UniqueName: \"kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:18 crc kubenswrapper[4908]: I0131 07:52:18.925654 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.027356 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7297\" (UniqueName: \"kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.027455 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.027628 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.033099 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.033406 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.045526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7297\" (UniqueName: \"kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qfkgh\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.162415 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.677422 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh"] Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.771998 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" event={"ID":"87d37be8-5527-495f-9013-36361f697414","Type":"ContainerStarted","Data":"f99f2f1f927754b4f1ee401a6df9916244e01b8ec0c9b39b3a031c788ec41a72"} Jan 31 07:52:19 crc kubenswrapper[4908]: I0131 07:52:19.941768 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:52:19 crc kubenswrapper[4908]: E0131 07:52:19.942290 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:20 crc kubenswrapper[4908]: I0131 07:52:20.781438 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" event={"ID":"87d37be8-5527-495f-9013-36361f697414","Type":"ContainerStarted","Data":"15902a77fcc65c29b755ea676545ef17f4ad368a0f4f8a6c3b3eac1513110b57"} Jan 31 07:52:20 crc kubenswrapper[4908]: I0131 07:52:20.805828 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" podStartSLOduration=2.281988655 podStartE2EDuration="2.805757022s" podCreationTimestamp="2026-01-31 07:52:18 +0000 UTC" firstStartedPulling="2026-01-31 
07:52:19.693022386 +0000 UTC m=+1846.308967040" lastFinishedPulling="2026-01-31 07:52:20.216790753 +0000 UTC m=+1846.832735407" observedRunningTime="2026-01-31 07:52:20.79969055 +0000 UTC m=+1847.415635224" watchObservedRunningTime="2026-01-31 07:52:20.805757022 +0000 UTC m=+1847.421701676" Jan 31 07:52:24 crc kubenswrapper[4908]: I0131 07:52:24.040916 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ptzt6"] Jan 31 07:52:24 crc kubenswrapper[4908]: I0131 07:52:24.049108 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ptzt6"] Jan 31 07:52:25 crc kubenswrapper[4908]: I0131 07:52:25.951672 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570bea44-7c9b-4296-9188-5e5e590e4493" path="/var/lib/kubelet/pods/570bea44-7c9b-4296-9188-5e5e590e4493/volumes" Jan 31 07:52:28 crc kubenswrapper[4908]: I0131 07:52:28.964040 4908 scope.go:117] "RemoveContainer" containerID="180e154f7fbd0350557b954c2fc464484a1cfd2215ffd4d20d43eb6a00b5aa42" Jan 31 07:52:32 crc kubenswrapper[4908]: I0131 07:52:32.940803 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:52:32 crc kubenswrapper[4908]: E0131 07:52:32.942157 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:34 crc kubenswrapper[4908]: I0131 07:52:34.035360 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-krd5r"] Jan 31 07:52:34 crc kubenswrapper[4908]: I0131 07:52:34.044533 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-krd5r"] Jan 31 07:52:35 crc kubenswrapper[4908]: I0131 07:52:35.951603 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b59881d-759b-492a-b475-c27f092660c6" path="/var/lib/kubelet/pods/3b59881d-759b-492a-b475-c27f092660c6/volumes" Jan 31 07:52:45 crc kubenswrapper[4908]: I0131 07:52:45.940844 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:52:45 crc kubenswrapper[4908]: E0131 07:52:45.941631 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:55 crc kubenswrapper[4908]: I0131 07:52:55.082360 4908 generic.go:334] "Generic (PLEG): container finished" podID="87d37be8-5527-495f-9013-36361f697414" containerID="15902a77fcc65c29b755ea676545ef17f4ad368a0f4f8a6c3b3eac1513110b57" exitCode=0 Jan 31 07:52:55 crc kubenswrapper[4908]: I0131 07:52:55.082468 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" event={"ID":"87d37be8-5527-495f-9013-36361f697414","Type":"ContainerDied","Data":"15902a77fcc65c29b755ea676545ef17f4ad368a0f4f8a6c3b3eac1513110b57"} Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.538411 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.652806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7297\" (UniqueName: \"kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297\") pod \"87d37be8-5527-495f-9013-36361f697414\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.652920 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam\") pod \"87d37be8-5527-495f-9013-36361f697414\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.653111 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory\") pod \"87d37be8-5527-495f-9013-36361f697414\" (UID: \"87d37be8-5527-495f-9013-36361f697414\") " Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.663290 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297" (OuterVolumeSpecName: "kube-api-access-w7297") pod "87d37be8-5527-495f-9013-36361f697414" (UID: "87d37be8-5527-495f-9013-36361f697414"). InnerVolumeSpecName "kube-api-access-w7297". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.679745 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory" (OuterVolumeSpecName: "inventory") pod "87d37be8-5527-495f-9013-36361f697414" (UID: "87d37be8-5527-495f-9013-36361f697414"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.683174 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "87d37be8-5527-495f-9013-36361f697414" (UID: "87d37be8-5527-495f-9013-36361f697414"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.756174 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7297\" (UniqueName: \"kubernetes.io/projected/87d37be8-5527-495f-9013-36361f697414-kube-api-access-w7297\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.756201 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.756212 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d37be8-5527-495f-9013-36361f697414-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:56 crc kubenswrapper[4908]: I0131 07:52:56.940017 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:52:56 crc kubenswrapper[4908]: E0131 07:52:56.940392 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.106740 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.106731 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh" event={"ID":"87d37be8-5527-495f-9013-36361f697414","Type":"ContainerDied","Data":"f99f2f1f927754b4f1ee401a6df9916244e01b8ec0c9b39b3a031c788ec41a72"} Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.106892 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99f2f1f927754b4f1ee401a6df9916244e01b8ec0c9b39b3a031c788ec41a72" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.182099 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx"] Jan 31 07:52:57 crc kubenswrapper[4908]: E0131 07:52:57.182676 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d37be8-5527-495f-9013-36361f697414" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.182695 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d37be8-5527-495f-9013-36361f697414" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.182887 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d37be8-5527-495f-9013-36361f697414" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.183457 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.185952 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.186161 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.186309 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.186455 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.198001 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx"] Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.266881 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.267219 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7bp\" (UniqueName: \"kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc 
kubenswrapper[4908]: I0131 07:52:57.267258 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.368933 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.368995 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7bp\" (UniqueName: \"kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.369017 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.375050 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.375361 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.424304 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7bp\" (UniqueName: \"kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:57 crc kubenswrapper[4908]: I0131 07:52:57.504468 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:52:58 crc kubenswrapper[4908]: I0131 07:52:58.008505 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx"] Jan 31 07:52:58 crc kubenswrapper[4908]: I0131 07:52:58.038600 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bmhth"] Jan 31 07:52:58 crc kubenswrapper[4908]: I0131 07:52:58.045998 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bmhth"] Jan 31 07:52:58 crc kubenswrapper[4908]: I0131 07:52:58.116328 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" event={"ID":"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d","Type":"ContainerStarted","Data":"0eadae7a8a66fee4b78ef17410cec8b84a425c7c531b0f93303683d65f6f4f36"} Jan 31 07:52:59 crc kubenswrapper[4908]: I0131 07:52:59.130425 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" event={"ID":"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d","Type":"ContainerStarted","Data":"c3af099a0d4748bb0e756818e73874c4f9ad162ffb0d9208446d192e005c4b99"} Jan 31 07:52:59 crc kubenswrapper[4908]: I0131 07:52:59.161344 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" podStartSLOduration=1.6779330749999999 podStartE2EDuration="2.161323648s" podCreationTimestamp="2026-01-31 07:52:57 +0000 UTC" firstStartedPulling="2026-01-31 07:52:58.015417925 +0000 UTC m=+1884.631362579" lastFinishedPulling="2026-01-31 07:52:58.498808498 +0000 UTC m=+1885.114753152" observedRunningTime="2026-01-31 07:52:59.150431785 +0000 UTC m=+1885.766376439" watchObservedRunningTime="2026-01-31 07:52:59.161323648 +0000 UTC m=+1885.777268302" Jan 31 07:52:59 crc kubenswrapper[4908]: I0131 
07:52:59.954547 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f3cf33-1d4c-4fae-ac0e-54d917d43325" path="/var/lib/kubelet/pods/a9f3cf33-1d4c-4fae-ac0e-54d917d43325/volumes" Jan 31 07:53:03 crc kubenswrapper[4908]: I0131 07:53:03.167089 4908 generic.go:334] "Generic (PLEG): container finished" podID="49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" containerID="c3af099a0d4748bb0e756818e73874c4f9ad162ffb0d9208446d192e005c4b99" exitCode=0 Jan 31 07:53:03 crc kubenswrapper[4908]: I0131 07:53:03.167276 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" event={"ID":"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d","Type":"ContainerDied","Data":"c3af099a0d4748bb0e756818e73874c4f9ad162ffb0d9208446d192e005c4b99"} Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.574061 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.598658 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf7bp\" (UniqueName: \"kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp\") pod \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.598890 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam\") pod \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.598971 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory\") pod \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\" (UID: \"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d\") " Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.604498 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp" (OuterVolumeSpecName: "kube-api-access-vf7bp") pod "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" (UID: "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d"). InnerVolumeSpecName "kube-api-access-vf7bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.629933 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory" (OuterVolumeSpecName: "inventory") pod "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" (UID: "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.633009 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" (UID: "49ca2ee4-9af6-4ab5-91f0-0e5203ad585d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.701315 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.701604 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:04 crc kubenswrapper[4908]: I0131 07:53:04.701674 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf7bp\" (UniqueName: \"kubernetes.io/projected/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d-kube-api-access-vf7bp\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.186605 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" event={"ID":"49ca2ee4-9af6-4ab5-91f0-0e5203ad585d","Type":"ContainerDied","Data":"0eadae7a8a66fee4b78ef17410cec8b84a425c7c531b0f93303683d65f6f4f36"} Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.186654 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eadae7a8a66fee4b78ef17410cec8b84a425c7c531b0f93303683d65f6f4f36" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.186701 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.261717 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r"] Jan 31 07:53:05 crc kubenswrapper[4908]: E0131 07:53:05.262263 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.262292 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.262552 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.263349 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.266521 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.266578 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.270560 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.279059 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r"] Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.279340 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.311727 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.311870 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwg46\" (UniqueName: \"kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.312087 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.413915 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.413998 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.414059 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwg46\" (UniqueName: \"kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.419495 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.419677 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.430086 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwg46\" (UniqueName: \"kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:05 crc kubenswrapper[4908]: I0131 07:53:05.583404 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:06 crc kubenswrapper[4908]: I0131 07:53:06.090936 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r"] Jan 31 07:53:06 crc kubenswrapper[4908]: I0131 07:53:06.196274 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" event={"ID":"1b59cc14-57f8-4c8d-9807-fdd0194ea923","Type":"ContainerStarted","Data":"3e8ba4c082ab68e06149b3594dbb90515e185a46e5665d7619c8fb2913c49b2e"} Jan 31 07:53:07 crc kubenswrapper[4908]: I0131 07:53:07.214865 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" event={"ID":"1b59cc14-57f8-4c8d-9807-fdd0194ea923","Type":"ContainerStarted","Data":"a9337c4a5e3bcca4a2af413c574c3947514656569bd223d6910e2d228e240d7c"} Jan 31 07:53:07 crc kubenswrapper[4908]: I0131 07:53:07.261317 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" podStartSLOduration=1.8271463350000001 podStartE2EDuration="2.261297933s" podCreationTimestamp="2026-01-31 07:53:05 +0000 UTC" firstStartedPulling="2026-01-31 07:53:06.097722246 +0000 UTC m=+1892.713666900" lastFinishedPulling="2026-01-31 07:53:06.531873844 +0000 UTC m=+1893.147818498" observedRunningTime="2026-01-31 07:53:07.254715868 +0000 UTC m=+1893.870660522" watchObservedRunningTime="2026-01-31 07:53:07.261297933 +0000 UTC m=+1893.877242587" Jan 31 07:53:09 crc kubenswrapper[4908]: I0131 07:53:09.940770 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:53:09 crc kubenswrapper[4908]: E0131 07:53:09.941427 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:53:11 crc kubenswrapper[4908]: I0131 07:53:11.037021 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8bx9v"] Jan 31 07:53:11 crc kubenswrapper[4908]: I0131 07:53:11.044314 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8bx9v"] Jan 31 07:53:11 crc kubenswrapper[4908]: I0131 07:53:11.952653 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8dbc71-e43b-49a6-9d68-78b987f39b89" path="/var/lib/kubelet/pods/ee8dbc71-e43b-49a6-9d68-78b987f39b89/volumes" Jan 31 07:53:22 crc kubenswrapper[4908]: I0131 07:53:22.940062 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:53:22 crc kubenswrapper[4908]: E0131 07:53:22.940816 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:53:26 crc kubenswrapper[4908]: I0131 07:53:26.030702 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lbm78"] Jan 31 07:53:26 crc kubenswrapper[4908]: I0131 07:53:26.052015 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lbm78"] Jan 31 07:53:27 crc kubenswrapper[4908]: I0131 07:53:27.950393 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d1b5c255-9609-4fc5-a3af-10d0faf40366" path="/var/lib/kubelet/pods/d1b5c255-9609-4fc5-a3af-10d0faf40366/volumes" Jan 31 07:53:29 crc kubenswrapper[4908]: I0131 07:53:29.047395 4908 scope.go:117] "RemoveContainer" containerID="bc2fd22b47fe8a4ed84c4721fb8f25d493a18ca5a11a3e6022c1f0f4e1b3be8f" Jan 31 07:53:29 crc kubenswrapper[4908]: I0131 07:53:29.074071 4908 scope.go:117] "RemoveContainer" containerID="5a86a9cecf68c69686e197d5cb7dd2280893873fbf065193f53ae3dd8613e2fe" Jan 31 07:53:29 crc kubenswrapper[4908]: I0131 07:53:29.127609 4908 scope.go:117] "RemoveContainer" containerID="51d33265242be6c5d8c594fbe23fcc24be49780a685e54cda6de66912ed24619" Jan 31 07:53:29 crc kubenswrapper[4908]: I0131 07:53:29.168584 4908 scope.go:117] "RemoveContainer" containerID="2bd36076791b4526cd4ca77aa1139f6079b805edcaabb95501b08d9e06dfe87e" Jan 31 07:53:33 crc kubenswrapper[4908]: I0131 07:53:33.940463 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:53:33 crc kubenswrapper[4908]: E0131 07:53:33.941763 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.044743 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zhkcp"] Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.055867 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kkx5h"] Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.065855 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mbdvd"] Jan 31 
07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.075221 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zhkcp"] Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.084489 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kkx5h"] Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.093364 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mbdvd"] Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.954161 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b02f781-7443-49e3-aa1e-a9a1a8a36329" path="/var/lib/kubelet/pods/4b02f781-7443-49e3-aa1e-a9a1a8a36329/volumes" Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.955472 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a874264b-db1f-4a01-9f15-c1e50e22b854" path="/var/lib/kubelet/pods/a874264b-db1f-4a01-9f15-c1e50e22b854/volumes" Jan 31 07:53:39 crc kubenswrapper[4908]: I0131 07:53:39.956212 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99de16f-0b42-4cfb-b041-c0a388bc31e0" path="/var/lib/kubelet/pods/b99de16f-0b42-4cfb-b041-c0a388bc31e0/volumes" Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 07:53:40.023783 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ada9-account-create-update-pprbw"] Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 07:53:40.033805 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b44a-account-create-update-bd28z"] Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 07:53:40.043134 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b96a-account-create-update-tppx9"] Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 07:53:40.057406 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ada9-account-create-update-pprbw"] Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 
07:53:40.064297 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b44a-account-create-update-bd28z"] Jan 31 07:53:40 crc kubenswrapper[4908]: I0131 07:53:40.070892 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b96a-account-create-update-tppx9"] Jan 31 07:53:41 crc kubenswrapper[4908]: I0131 07:53:41.955432 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0405b85c-acf0-4a3a-9018-c34165dd440a" path="/var/lib/kubelet/pods/0405b85c-acf0-4a3a-9018-c34165dd440a/volumes" Jan 31 07:53:41 crc kubenswrapper[4908]: I0131 07:53:41.956235 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220" path="/var/lib/kubelet/pods/b0ced2b4-a151-4b9c-9bc2-1f9eee1e2220/volumes" Jan 31 07:53:41 crc kubenswrapper[4908]: I0131 07:53:41.956878 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ca7fd3-7fa3-4048-8325-f58efea50f94" path="/var/lib/kubelet/pods/c4ca7fd3-7fa3-4048-8325-f58efea50f94/volumes" Jan 31 07:53:45 crc kubenswrapper[4908]: I0131 07:53:45.940042 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:53:46 crc kubenswrapper[4908]: I0131 07:53:46.565655 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6"} Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.804455 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.806856 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.816352 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.982133 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.982185 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:51 crc kubenswrapper[4908]: I0131 07:53:51.982214 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kkzr\" (UniqueName: \"kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.083742 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.083806 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.083851 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kkzr\" (UniqueName: \"kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.084751 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.084819 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.105602 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kkzr\" (UniqueName: \"kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr\") pod \"redhat-operators-d7lmk\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.179028 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.618009 4908 generic.go:334] "Generic (PLEG): container finished" podID="1b59cc14-57f8-4c8d-9807-fdd0194ea923" containerID="a9337c4a5e3bcca4a2af413c574c3947514656569bd223d6910e2d228e240d7c" exitCode=0 Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.618100 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" event={"ID":"1b59cc14-57f8-4c8d-9807-fdd0194ea923","Type":"ContainerDied","Data":"a9337c4a5e3bcca4a2af413c574c3947514656569bd223d6910e2d228e240d7c"} Jan 31 07:53:52 crc kubenswrapper[4908]: I0131 07:53:52.706926 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.593733 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4s4qj"] Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.595949 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.612209 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4s4qj"] Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.628267 4908 generic.go:334] "Generic (PLEG): container finished" podID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerID="3bdeb8030f06b3e30c4c6a4351a868fe69103f75b0438d8672b8240e96a8a404" exitCode=0 Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.628411 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerDied","Data":"3bdeb8030f06b3e30c4c6a4351a868fe69103f75b0438d8672b8240e96a8a404"} Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.628479 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerStarted","Data":"64de0e010b59e43cf426fac0d89a1317babc15e2c968e1879a31613c60dfc4a6"} Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.717498 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.717769 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc 
kubenswrapper[4908]: I0131 07:53:53.717965 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc4mr\" (UniqueName: \"kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.819624 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.819723 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.819797 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc4mr\" (UniqueName: \"kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.820184 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc 
kubenswrapper[4908]: I0131 07:53:53.820280 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.845056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc4mr\" (UniqueName: \"kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr\") pod \"community-operators-4s4qj\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") " pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:53 crc kubenswrapper[4908]: I0131 07:53:53.919432 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s4qj" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.067521 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.148630 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory\") pod \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.148806 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwg46\" (UniqueName: \"kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46\") pod \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.148848 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam\") pod \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\" (UID: \"1b59cc14-57f8-4c8d-9807-fdd0194ea923\") " Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.188191 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46" (OuterVolumeSpecName: "kube-api-access-cwg46") pod "1b59cc14-57f8-4c8d-9807-fdd0194ea923" (UID: "1b59cc14-57f8-4c8d-9807-fdd0194ea923"). InnerVolumeSpecName "kube-api-access-cwg46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.209125 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory" (OuterVolumeSpecName: "inventory") pod "1b59cc14-57f8-4c8d-9807-fdd0194ea923" (UID: "1b59cc14-57f8-4c8d-9807-fdd0194ea923"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.211198 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b59cc14-57f8-4c8d-9807-fdd0194ea923" (UID: "1b59cc14-57f8-4c8d-9807-fdd0194ea923"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.241251 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4s4qj"] Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.251094 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.251133 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwg46\" (UniqueName: \"kubernetes.io/projected/1b59cc14-57f8-4c8d-9807-fdd0194ea923-kube-api-access-cwg46\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.251150 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b59cc14-57f8-4c8d-9807-fdd0194ea923-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.639009 4908 generic.go:334] "Generic (PLEG): container finished" podID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerID="b650f5111127fb8ce5e903597a93bb143b52041f577e4eb63044d35f8119ad83" exitCode=0 Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.639094 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" 
event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerDied","Data":"b650f5111127fb8ce5e903597a93bb143b52041f577e4eb63044d35f8119ad83"} Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.639126 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerStarted","Data":"2ddb5f278e5cdb6f1fe248a7bf1b4d05603b330a67c1fa5d3acb37d3128fb36f"} Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.640952 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" event={"ID":"1b59cc14-57f8-4c8d-9807-fdd0194ea923","Type":"ContainerDied","Data":"3e8ba4c082ab68e06149b3594dbb90515e185a46e5665d7619c8fb2913c49b2e"} Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.641012 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8ba4c082ab68e06149b3594dbb90515e185a46e5665d7619c8fb2913c49b2e" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.641061 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.722286 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpj2x"] Jan 31 07:53:54 crc kubenswrapper[4908]: E0131 07:53:54.722669 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b59cc14-57f8-4c8d-9807-fdd0194ea923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.722687 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b59cc14-57f8-4c8d-9807-fdd0194ea923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.722905 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b59cc14-57f8-4c8d-9807-fdd0194ea923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.734513 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.735339 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpj2x"] Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.736732 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.737346 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.738765 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.738955 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.760079 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkd5\" (UniqueName: \"kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.760232 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.760364 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.861571 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.861903 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkd5\" (UniqueName: \"kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.861972 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.867257 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.867266 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.881328 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkd5\" (UniqueName: \"kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5\") pod \"ssh-known-hosts-edpm-deployment-bpj2x\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:54 crc kubenswrapper[4908]: I0131 07:53:54.996738 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"]
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.017572 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.039583 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"]
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.053758 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.065146 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.065272 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.065312 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njw6b\" (UniqueName: \"kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.166792 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.166961 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.167089 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njw6b\" (UniqueName: \"kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.167951 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.168246 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.187078 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njw6b\" (UniqueName: \"kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b\") pod \"redhat-marketplace-86r88\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") " pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.345927 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.636336 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpj2x"]
Jan 31 07:53:55 crc kubenswrapper[4908]: W0131 07:53:55.665299 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7eef61_f6ee_4f41_8207_d010fcde59e5.slice/crio-8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c WatchSource:0}: Error finding container 8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c: Status 404 returned error can't find the container with id 8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.673501 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerStarted","Data":"f48ffb42db4080b7c1f7608d9ef73845fe36d9c73661a3ce3b8a160f6820e692"}
Jan 31 07:53:55 crc kubenswrapper[4908]: I0131 07:53:55.823072 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"]
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.685266 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerStarted","Data":"99cf23c7a10c7fe3cf511bc4f3a831fb6a0d26a6d2daafa3dd64ee319f45aedf"}
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.688374 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerStarted","Data":"ac90b9a93e48c9d81cc600e9fca6093d2d0abc269e17f7b42e8461a89012ca4e"}
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.688685 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerStarted","Data":"aedac3a5d6acdd2e764ad2056d5f5d0ff5e1b1e4565d066ae2abeffd5a3278f7"}
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.690143 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" event={"ID":"bf7eef61-f6ee-4f41-8207-d010fcde59e5","Type":"ContainerStarted","Data":"8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c"}
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.692279 4908 generic.go:334] "Generic (PLEG): container finished" podID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerID="f48ffb42db4080b7c1f7608d9ef73845fe36d9c73661a3ce3b8a160f6820e692" exitCode=0
Jan 31 07:53:56 crc kubenswrapper[4908]: I0131 07:53:56.692329 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerDied","Data":"f48ffb42db4080b7c1f7608d9ef73845fe36d9c73661a3ce3b8a160f6820e692"}
Jan 31 07:53:58 crc kubenswrapper[4908]: I0131 07:53:58.725620 4908 generic.go:334] "Generic (PLEG): container finished" podID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerID="ac90b9a93e48c9d81cc600e9fca6093d2d0abc269e17f7b42e8461a89012ca4e" exitCode=0
Jan 31 07:53:58 crc kubenswrapper[4908]: I0131 07:53:58.725681 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerDied","Data":"ac90b9a93e48c9d81cc600e9fca6093d2d0abc269e17f7b42e8461a89012ca4e"}
Jan 31 07:53:58 crc kubenswrapper[4908]: I0131 07:53:58.742410 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" event={"ID":"bf7eef61-f6ee-4f41-8207-d010fcde59e5","Type":"ContainerStarted","Data":"c7e4e8e329926e10155fa62e2fe09582cb3c7547c2623eda72edea5312f6d757"}
Jan 31 07:53:58 crc kubenswrapper[4908]: I0131 07:53:58.781329 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" podStartSLOduration=2.112996601 podStartE2EDuration="4.781302107s" podCreationTimestamp="2026-01-31 07:53:54 +0000 UTC" firstStartedPulling="2026-01-31 07:53:55.672020732 +0000 UTC m=+1942.287965386" lastFinishedPulling="2026-01-31 07:53:58.340326238 +0000 UTC m=+1944.956270892" observedRunningTime="2026-01-31 07:53:58.77225041 +0000 UTC m=+1945.388195094" watchObservedRunningTime="2026-01-31 07:53:58.781302107 +0000 UTC m=+1945.397246761"
Jan 31 07:53:59 crc kubenswrapper[4908]: I0131 07:53:59.757123 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerStarted","Data":"d81d928a9849beda2fc354d99b832539b8329393ea46eb6f72a84fd2e228e486"}
Jan 31 07:53:59 crc kubenswrapper[4908]: I0131 07:53:59.785628 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7lmk" podStartSLOduration=3.551006614 podStartE2EDuration="8.785610726s" podCreationTimestamp="2026-01-31 07:53:51 +0000 UTC" firstStartedPulling="2026-01-31 07:53:53.629812721 +0000 UTC m=+1940.245757375" lastFinishedPulling="2026-01-31 07:53:58.864416823 +0000 UTC m=+1945.480361487" observedRunningTime="2026-01-31 07:53:59.778739834 +0000 UTC m=+1946.394684508" watchObservedRunningTime="2026-01-31 07:53:59.785610726 +0000 UTC m=+1946.401555380"
Jan 31 07:54:00 crc kubenswrapper[4908]: I0131 07:54:00.765873 4908 generic.go:334] "Generic (PLEG): container finished" podID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerID="6b70a60f842983c12ae068abb04f22e7c73090443a0841c508e02bf2853f8e5f" exitCode=0
Jan 31 07:54:00 crc kubenswrapper[4908]: I0131 07:54:00.765955 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerDied","Data":"6b70a60f842983c12ae068abb04f22e7c73090443a0841c508e02bf2853f8e5f"}
Jan 31 07:54:02 crc kubenswrapper[4908]: I0131 07:54:02.179874 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7lmk"
Jan 31 07:54:02 crc kubenswrapper[4908]: I0131 07:54:02.179939 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7lmk"
Jan 31 07:54:03 crc kubenswrapper[4908]: I0131 07:54:03.233007 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=<
Jan 31 07:54:03 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s
Jan 31 07:54:03 crc kubenswrapper[4908]: >
Jan 31 07:54:03 crc kubenswrapper[4908]: I0131 07:54:03.792194 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerStarted","Data":"3d2f50e0c6038c86ed745aef2738be108d13df98e6729680cc1f6eb8f6127f8e"}
Jan 31 07:54:03 crc kubenswrapper[4908]: I0131 07:54:03.809231 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-86r88" podStartSLOduration=5.422495912 podStartE2EDuration="9.80921514s" podCreationTimestamp="2026-01-31 07:53:54 +0000 UTC" firstStartedPulling="2026-01-31 07:53:58.800737785 +0000 UTC m=+1945.416682439" lastFinishedPulling="2026-01-31 07:54:03.187457013 +0000 UTC m=+1949.803401667" observedRunningTime="2026-01-31 07:54:03.806883472 +0000 UTC m=+1950.422828136" watchObservedRunningTime="2026-01-31 07:54:03.80921514 +0000 UTC m=+1950.425159794"
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.347312 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.347368 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.395942 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.811728 4908 generic.go:334] "Generic (PLEG): container finished" podID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerID="99cf23c7a10c7fe3cf511bc4f3a831fb6a0d26a6d2daafa3dd64ee319f45aedf" exitCode=0
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.811868 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerDied","Data":"99cf23c7a10c7fe3cf511bc4f3a831fb6a0d26a6d2daafa3dd64ee319f45aedf"}
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.814957 4908 generic.go:334] "Generic (PLEG): container finished" podID="bf7eef61-f6ee-4f41-8207-d010fcde59e5" containerID="c7e4e8e329926e10155fa62e2fe09582cb3c7547c2623eda72edea5312f6d757" exitCode=0
Jan 31 07:54:05 crc kubenswrapper[4908]: I0131 07:54:05.815882 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" event={"ID":"bf7eef61-f6ee-4f41-8207-d010fcde59e5","Type":"ContainerDied","Data":"c7e4e8e329926e10155fa62e2fe09582cb3c7547c2623eda72edea5312f6d757"}
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.240547 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.302863 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0\") pod \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") "
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.303163 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam\") pod \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") "
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.303257 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmkd5\" (UniqueName: \"kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5\") pod \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\" (UID: \"bf7eef61-f6ee-4f41-8207-d010fcde59e5\") "
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.308207 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5" (OuterVolumeSpecName: "kube-api-access-gmkd5") pod "bf7eef61-f6ee-4f41-8207-d010fcde59e5" (UID: "bf7eef61-f6ee-4f41-8207-d010fcde59e5"). InnerVolumeSpecName "kube-api-access-gmkd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.338966 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf7eef61-f6ee-4f41-8207-d010fcde59e5" (UID: "bf7eef61-f6ee-4f41-8207-d010fcde59e5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.349109 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bf7eef61-f6ee-4f41-8207-d010fcde59e5" (UID: "bf7eef61-f6ee-4f41-8207-d010fcde59e5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.405615 4908 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.405877 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf7eef61-f6ee-4f41-8207-d010fcde59e5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.405954 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmkd5\" (UniqueName: \"kubernetes.io/projected/bf7eef61-f6ee-4f41-8207-d010fcde59e5-kube-api-access-gmkd5\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.876319 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x" event={"ID":"bf7eef61-f6ee-4f41-8207-d010fcde59e5","Type":"ContainerDied","Data":"8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c"}
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.876561 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2959aaff413feeddc00ff3b7310b3d77005ec2177206b300939277272db90c"
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.876425 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpj2x"
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.909429 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerStarted","Data":"4337787e8472a85a91dc7751568fbfe748a30ed9e5d227bf44c29222ae2b55a7"}
Jan 31 07:54:07 crc kubenswrapper[4908]: I0131 07:54:07.946753 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4s4qj" podStartSLOduration=3.104521958 podStartE2EDuration="14.946738905s" podCreationTimestamp="2026-01-31 07:53:53 +0000 UTC" firstStartedPulling="2026-01-31 07:53:54.748025139 +0000 UTC m=+1941.363969793" lastFinishedPulling="2026-01-31 07:54:06.590242086 +0000 UTC m=+1953.206186740" observedRunningTime="2026-01-31 07:54:07.944393226 +0000 UTC m=+1954.560337890" watchObservedRunningTime="2026-01-31 07:54:07.946738905 +0000 UTC m=+1954.562683559"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.002470 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"]
Jan 31 07:54:08 crc kubenswrapper[4908]: E0131 07:54:08.002870 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7eef61-f6ee-4f41-8207-d010fcde59e5" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.002887 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7eef61-f6ee-4f41-8207-d010fcde59e5" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.003110 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7eef61-f6ee-4f41-8207-d010fcde59e5" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.003737 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.008334 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.008564 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.008689 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.008782 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.025087 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"]
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.031025 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvw4\" (UniqueName: \"kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.031281 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.031520 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.133002 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.133439 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.133597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvw4\" (UniqueName: \"kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.137922 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.149035 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.152074 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvw4\" (UniqueName: \"kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fs7kz\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.326626 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.899738 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"]
Jan 31 07:54:08 crc kubenswrapper[4908]: I0131 07:54:08.917449 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" event={"ID":"74bf529d-bd61-4ef6-98e4-22cfcf412e48","Type":"ContainerStarted","Data":"b99511f29d6e87c8ec17448852b4466602b77665015f924e2885138d35ff4b06"}
Jan 31 07:54:13 crc kubenswrapper[4908]: I0131 07:54:13.222477 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=<
Jan 31 07:54:13 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s
Jan 31 07:54:13 crc kubenswrapper[4908]: >
Jan 31 07:54:13 crc kubenswrapper[4908]: I0131 07:54:13.920348 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:13 crc kubenswrapper[4908]: I0131 07:54:13.920402 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:13 crc kubenswrapper[4908]: I0131 07:54:13.975432 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:14 crc kubenswrapper[4908]: I0131 07:54:14.025115 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:14 crc kubenswrapper[4908]: I0131 07:54:14.215813 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4s4qj"]
Jan 31 07:54:15 crc kubenswrapper[4908]: I0131 07:54:15.389623 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:15 crc kubenswrapper[4908]: I0131 07:54:15.972687 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4s4qj" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="registry-server" containerID="cri-o://4337787e8472a85a91dc7751568fbfe748a30ed9e5d227bf44c29222ae2b55a7" gracePeriod=2
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.615027 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"]
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.616572 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-86r88" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="registry-server" containerID="cri-o://3d2f50e0c6038c86ed745aef2738be108d13df98e6729680cc1f6eb8f6127f8e" gracePeriod=2
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.983652 4908 generic.go:334] "Generic (PLEG): container finished" podID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerID="3d2f50e0c6038c86ed745aef2738be108d13df98e6729680cc1f6eb8f6127f8e" exitCode=0
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.983734 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerDied","Data":"3d2f50e0c6038c86ed745aef2738be108d13df98e6729680cc1f6eb8f6127f8e"}
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.985321 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" event={"ID":"74bf529d-bd61-4ef6-98e4-22cfcf412e48","Type":"ContainerStarted","Data":"9d38c2f76bab226165e916c8bb0fbf290d3bbf3824bfa99c401a80a277346179"}
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.987628 4908 generic.go:334] "Generic (PLEG): container finished" podID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerID="4337787e8472a85a91dc7751568fbfe748a30ed9e5d227bf44c29222ae2b55a7" exitCode=0
Jan 31 07:54:16 crc kubenswrapper[4908]: I0131 07:54:16.987738 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerDied","Data":"4337787e8472a85a91dc7751568fbfe748a30ed9e5d227bf44c29222ae2b55a7"}
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.008007 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" podStartSLOduration=2.941025422 podStartE2EDuration="10.007988808s" podCreationTimestamp="2026-01-31 07:54:07 +0000 UTC" firstStartedPulling="2026-01-31 07:54:08.903962521 +0000 UTC m=+1955.519907175" lastFinishedPulling="2026-01-31 07:54:15.970925907 +0000 UTC m=+1962.586870561" observedRunningTime="2026-01-31 07:54:17.00249621 +0000 UTC m=+1963.618440864" watchObservedRunningTime="2026-01-31 07:54:17.007988808 +0000 UTC m=+1963.623933462"
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.774705 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.833987 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities\") pod \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") "
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.834029 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content\") pod \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") "
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.834150 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc4mr\" (UniqueName: \"kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr\") pod \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\" (UID: \"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d\") "
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.834829 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities" (OuterVolumeSpecName: "utilities") pod "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" (UID: "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.841463 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr" (OuterVolumeSpecName: "kube-api-access-cc4mr") pod "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" (UID: "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d"). InnerVolumeSpecName "kube-api-access-cc4mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.896915 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" (UID: "0d52cc1f-60d2-4122-ab4a-94ef0fd0735d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.920908 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.938490 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc4mr\" (UniqueName: \"kubernetes.io/projected/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-kube-api-access-cc4mr\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.938519 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.938530 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.999441 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s4qj" event={"ID":"0d52cc1f-60d2-4122-ab4a-94ef0fd0735d","Type":"ContainerDied","Data":"2ddb5f278e5cdb6f1fe248a7bf1b4d05603b330a67c1fa5d3acb37d3128fb36f"}
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.999495 4908 scope.go:117] "RemoveContainer" containerID="4337787e8472a85a91dc7751568fbfe748a30ed9e5d227bf44c29222ae2b55a7"
Jan 31 07:54:17 crc kubenswrapper[4908]: I0131 07:54:17.999654 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s4qj"
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.005847 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86r88" event={"ID":"ac6dcb59-9f90-4dbc-9d68-359f871a4527","Type":"ContainerDied","Data":"aedac3a5d6acdd2e764ad2056d5f5d0ff5e1b1e4565d066ae2abeffd5a3278f7"}
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.005878 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86r88"
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.031451 4908 scope.go:117] "RemoveContainer" containerID="99cf23c7a10c7fe3cf511bc4f3a831fb6a0d26a6d2daafa3dd64ee319f45aedf"
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.031638 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4s4qj"]
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.051532 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities\") pod \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") "
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.051644 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content\") pod \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") "
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.051692 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njw6b\" (UniqueName: \"kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b\") pod \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\" (UID: \"ac6dcb59-9f90-4dbc-9d68-359f871a4527\") "
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.052763 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities" (OuterVolumeSpecName: "utilities") pod "ac6dcb59-9f90-4dbc-9d68-359f871a4527" (UID: "ac6dcb59-9f90-4dbc-9d68-359f871a4527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.054087 4908 scope.go:117] "RemoveContainer" containerID="b650f5111127fb8ce5e903597a93bb143b52041f577e4eb63044d35f8119ad83"
Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.060172 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b" (OuterVolumeSpecName: "kube-api-access-njw6b") pod "ac6dcb59-9f90-4dbc-9d68-359f871a4527" (UID: "ac6dcb59-9f90-4dbc-9d68-359f871a4527"). InnerVolumeSpecName "kube-api-access-njw6b".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.065268 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.065302 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njw6b\" (UniqueName: \"kubernetes.io/projected/ac6dcb59-9f90-4dbc-9d68-359f871a4527-kube-api-access-njw6b\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.072582 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac6dcb59-9f90-4dbc-9d68-359f871a4527" (UID: "ac6dcb59-9f90-4dbc-9d68-359f871a4527"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.082215 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4s4qj"] Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.099490 4908 scope.go:117] "RemoveContainer" containerID="3d2f50e0c6038c86ed745aef2738be108d13df98e6729680cc1f6eb8f6127f8e" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.126708 4908 scope.go:117] "RemoveContainer" containerID="6b70a60f842983c12ae068abb04f22e7c73090443a0841c508e02bf2853f8e5f" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.153231 4908 scope.go:117] "RemoveContainer" containerID="ac90b9a93e48c9d81cc600e9fca6093d2d0abc269e17f7b42e8461a89012ca4e" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.166850 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6dcb59-9f90-4dbc-9d68-359f871a4527-catalog-content\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.347753 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"] Jan 31 07:54:18 crc kubenswrapper[4908]: I0131 07:54:18.357539 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-86r88"] Jan 31 07:54:19 crc kubenswrapper[4908]: I0131 07:54:19.957785 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" path="/var/lib/kubelet/pods/0d52cc1f-60d2-4122-ab4a-94ef0fd0735d/volumes" Jan 31 07:54:19 crc kubenswrapper[4908]: I0131 07:54:19.959802 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" path="/var/lib/kubelet/pods/ac6dcb59-9f90-4dbc-9d68-359f871a4527/volumes" Jan 31 07:54:23 crc kubenswrapper[4908]: I0131 07:54:23.227113 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:54:23 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:54:23 crc kubenswrapper[4908]: > Jan 31 07:54:24 crc kubenswrapper[4908]: I0131 07:54:24.042037 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7ksrc"] Jan 31 07:54:24 crc kubenswrapper[4908]: I0131 07:54:24.052175 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7ksrc"] Jan 31 07:54:25 crc kubenswrapper[4908]: I0131 07:54:25.953241 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668eb706-10d5-4310-9c66-d8d77bebe230" path="/var/lib/kubelet/pods/668eb706-10d5-4310-9c66-d8d77bebe230/volumes" Jan 31 07:54:26 crc kubenswrapper[4908]: I0131 07:54:26.081953 4908 generic.go:334] "Generic (PLEG): container finished" 
podID="74bf529d-bd61-4ef6-98e4-22cfcf412e48" containerID="9d38c2f76bab226165e916c8bb0fbf290d3bbf3824bfa99c401a80a277346179" exitCode=0 Jan 31 07:54:26 crc kubenswrapper[4908]: I0131 07:54:26.082017 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" event={"ID":"74bf529d-bd61-4ef6-98e4-22cfcf412e48","Type":"ContainerDied","Data":"9d38c2f76bab226165e916c8bb0fbf290d3bbf3824bfa99c401a80a277346179"} Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.551465 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.649203 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory\") pod \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.649353 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam\") pod \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.649601 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cvw4\" (UniqueName: \"kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4\") pod \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\" (UID: \"74bf529d-bd61-4ef6-98e4-22cfcf412e48\") " Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.656189 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4" 
(OuterVolumeSpecName: "kube-api-access-9cvw4") pod "74bf529d-bd61-4ef6-98e4-22cfcf412e48" (UID: "74bf529d-bd61-4ef6-98e4-22cfcf412e48"). InnerVolumeSpecName "kube-api-access-9cvw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.683193 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74bf529d-bd61-4ef6-98e4-22cfcf412e48" (UID: "74bf529d-bd61-4ef6-98e4-22cfcf412e48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.691711 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory" (OuterVolumeSpecName: "inventory") pod "74bf529d-bd61-4ef6-98e4-22cfcf412e48" (UID: "74bf529d-bd61-4ef6-98e4-22cfcf412e48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.752026 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cvw4\" (UniqueName: \"kubernetes.io/projected/74bf529d-bd61-4ef6-98e4-22cfcf412e48-kube-api-access-9cvw4\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.752058 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4908]: I0131 07:54:27.752067 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74bf529d-bd61-4ef6-98e4-22cfcf412e48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.100862 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" event={"ID":"74bf529d-bd61-4ef6-98e4-22cfcf412e48","Type":"ContainerDied","Data":"b99511f29d6e87c8ec17448852b4466602b77665015f924e2885138d35ff4b06"} Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.100918 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99511f29d6e87c8ec17448852b4466602b77665015f924e2885138d35ff4b06" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.101025 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.197734 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9"] Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198182 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="extract-utilities" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198201 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="extract-utilities" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198219 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198226 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198235 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="extract-content" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198243 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="extract-content" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198255 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="extract-content" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198262 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="extract-content" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198278 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74bf529d-bd61-4ef6-98e4-22cfcf412e48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198284 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bf529d-bd61-4ef6-98e4-22cfcf412e48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198301 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198308 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: E0131 07:54:28.198324 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="extract-utilities" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198331 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="extract-utilities" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198469 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6dcb59-9f90-4dbc-9d68-359f871a4527" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198484 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d52cc1f-60d2-4122-ab4a-94ef0fd0735d" containerName="registry-server" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.198497 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bf529d-bd61-4ef6-98e4-22cfcf412e48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.199122 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.202337 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.202402 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.202781 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.202873 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.209313 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9"] Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.363549 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.363643 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.363682 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lsh\" (UniqueName: \"kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.465329 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.465413 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.465457 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lsh\" (UniqueName: \"kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.480658 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.481608 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.483381 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lsh\" (UniqueName: \"kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:28 crc kubenswrapper[4908]: I0131 07:54:28.513104 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.037831 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9"] Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.111177 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" event={"ID":"24fb4b71-168a-4659-80ff-445b761ea66a","Type":"ContainerStarted","Data":"fce86a111a074aebc7f1b42be31ff851f0b355a9de20459b2f143dad2f46cd93"} Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.284594 4908 scope.go:117] "RemoveContainer" containerID="ba7fa44fe398d73e051b3d97c9c9d2ca28595d34d22e982b1fad5a8abc928307" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.312378 4908 scope.go:117] "RemoveContainer" containerID="50eda1ac36c54e872cf51f718cccffdc29dfb09a5e5827257c12b06db26bafc6" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.347272 4908 scope.go:117] "RemoveContainer" containerID="9dee05cc9703721636256b8027e2ba92f2bbb0b1fda6ee47ea382fedae0af7d1" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.371050 4908 scope.go:117] "RemoveContainer" containerID="292464a81710a3ad057c52f4a6867bd255c4ad37143b5c763f45c9dd8012504b" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.407571 4908 scope.go:117] "RemoveContainer" containerID="c4111be580f71c81f7e5629e3ab6cec9e978eda21a807fca55e56e0d00a7596f" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.423883 4908 scope.go:117] "RemoveContainer" containerID="6b4776641867526effabb111ab2cb2e8ab183558e547005297d967fc452316eb" Jan 31 07:54:29 crc kubenswrapper[4908]: I0131 07:54:29.443487 4908 scope.go:117] "RemoveContainer" containerID="6082979425b80523e7336fd990b3cbabac41140f2b3c15cc96cc5cc0d13cd27a" Jan 31 07:54:33 crc kubenswrapper[4908]: I0131 07:54:33.242089 4908 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:54:33 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:54:33 crc kubenswrapper[4908]: > Jan 31 07:54:34 crc kubenswrapper[4908]: I0131 07:54:34.152086 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" event={"ID":"24fb4b71-168a-4659-80ff-445b761ea66a","Type":"ContainerStarted","Data":"adc9480650a8ae918fbebe6ecd320355121b55c388bbcb5660eb33a5c1a79e75"} Jan 31 07:54:34 crc kubenswrapper[4908]: I0131 07:54:34.167636 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" podStartSLOduration=3.010755787 podStartE2EDuration="6.167615626s" podCreationTimestamp="2026-01-31 07:54:28 +0000 UTC" firstStartedPulling="2026-01-31 07:54:29.044073463 +0000 UTC m=+1975.660018127" lastFinishedPulling="2026-01-31 07:54:32.200933312 +0000 UTC m=+1978.816877966" observedRunningTime="2026-01-31 07:54:34.166032247 +0000 UTC m=+1980.781976901" watchObservedRunningTime="2026-01-31 07:54:34.167615626 +0000 UTC m=+1980.783560280" Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.055124 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9wmzx"] Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.062034 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9wmzx"] Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.230835 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:54:43 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 
07:54:43 crc kubenswrapper[4908]: > Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.252963 4908 generic.go:334] "Generic (PLEG): container finished" podID="24fb4b71-168a-4659-80ff-445b761ea66a" containerID="adc9480650a8ae918fbebe6ecd320355121b55c388bbcb5660eb33a5c1a79e75" exitCode=0 Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.253048 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" event={"ID":"24fb4b71-168a-4659-80ff-445b761ea66a","Type":"ContainerDied","Data":"adc9480650a8ae918fbebe6ecd320355121b55c388bbcb5660eb33a5c1a79e75"} Jan 31 07:54:43 crc kubenswrapper[4908]: I0131 07:54:43.957902 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db9441f-9c32-4ef1-a91c-1e6e76b57a81" path="/var/lib/kubelet/pods/5db9441f-9c32-4ef1-a91c-1e6e76b57a81/volumes" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.669348 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.758266 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6lsh\" (UniqueName: \"kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh\") pod \"24fb4b71-168a-4659-80ff-445b761ea66a\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.758353 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory\") pod \"24fb4b71-168a-4659-80ff-445b761ea66a\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.758521 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam\") pod \"24fb4b71-168a-4659-80ff-445b761ea66a\" (UID: \"24fb4b71-168a-4659-80ff-445b761ea66a\") " Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.765545 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh" (OuterVolumeSpecName: "kube-api-access-n6lsh") pod "24fb4b71-168a-4659-80ff-445b761ea66a" (UID: "24fb4b71-168a-4659-80ff-445b761ea66a"). InnerVolumeSpecName "kube-api-access-n6lsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.789308 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory" (OuterVolumeSpecName: "inventory") pod "24fb4b71-168a-4659-80ff-445b761ea66a" (UID: "24fb4b71-168a-4659-80ff-445b761ea66a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.819442 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24fb4b71-168a-4659-80ff-445b761ea66a" (UID: "24fb4b71-168a-4659-80ff-445b761ea66a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.861115 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.861153 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6lsh\" (UniqueName: \"kubernetes.io/projected/24fb4b71-168a-4659-80ff-445b761ea66a-kube-api-access-n6lsh\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:44 crc kubenswrapper[4908]: I0131 07:54:44.861164 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24fb4b71-168a-4659-80ff-445b761ea66a-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:45 crc kubenswrapper[4908]: I0131 07:54:45.273490 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" event={"ID":"24fb4b71-168a-4659-80ff-445b761ea66a","Type":"ContainerDied","Data":"fce86a111a074aebc7f1b42be31ff851f0b355a9de20459b2f143dad2f46cd93"} Jan 31 07:54:45 crc kubenswrapper[4908]: I0131 07:54:45.273564 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce86a111a074aebc7f1b42be31ff851f0b355a9de20459b2f143dad2f46cd93" Jan 31 07:54:45 crc kubenswrapper[4908]: I0131 07:54:45.273658 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9" Jan 31 07:54:53 crc kubenswrapper[4908]: I0131 07:54:53.243341 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:54:53 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:54:53 crc kubenswrapper[4908]: > Jan 31 07:55:02 crc kubenswrapper[4908]: I0131 07:55:02.067041 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7zxw"] Jan 31 07:55:02 crc kubenswrapper[4908]: I0131 07:55:02.074527 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7zxw"] Jan 31 07:55:03 crc kubenswrapper[4908]: I0131 07:55:03.228327 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:55:03 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:55:03 crc kubenswrapper[4908]: > Jan 31 07:55:03 crc kubenswrapper[4908]: I0131 07:55:03.951747 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905d2170-5f0c-4ee0-86b9-d659c80ad9f7" path="/var/lib/kubelet/pods/905d2170-5f0c-4ee0-86b9-d659c80ad9f7/volumes" Jan 31 07:55:13 crc kubenswrapper[4908]: I0131 07:55:13.231765 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" probeResult="failure" output=< Jan 31 07:55:13 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 07:55:13 crc kubenswrapper[4908]: > Jan 31 07:55:22 crc kubenswrapper[4908]: I0131 07:55:22.236620 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:55:22 crc kubenswrapper[4908]: I0131 07:55:22.290874 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:55:22 crc kubenswrapper[4908]: I0131 07:55:22.475723 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:55:23 crc kubenswrapper[4908]: I0131 07:55:23.643234 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7lmk" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" containerID="cri-o://d81d928a9849beda2fc354d99b832539b8329393ea46eb6f72a84fd2e228e486" gracePeriod=2 Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.654652 4908 generic.go:334] "Generic (PLEG): container finished" podID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerID="d81d928a9849beda2fc354d99b832539b8329393ea46eb6f72a84fd2e228e486" exitCode=0 Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.654752 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerDied","Data":"d81d928a9849beda2fc354d99b832539b8329393ea46eb6f72a84fd2e228e486"} Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.766638 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.927123 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content\") pod \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.927252 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities\") pod \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.927332 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kkzr\" (UniqueName: \"kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr\") pod \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\" (UID: \"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb\") " Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.927709 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities" (OuterVolumeSpecName: "utilities") pod "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" (UID: "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:24 crc kubenswrapper[4908]: I0131 07:55:24.933140 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr" (OuterVolumeSpecName: "kube-api-access-7kkzr") pod "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" (UID: "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb"). InnerVolumeSpecName "kube-api-access-7kkzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.030445 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.030932 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kkzr\" (UniqueName: \"kubernetes.io/projected/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-kube-api-access-7kkzr\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.063864 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" (UID: "ab3ed7e6-c344-450a-b8dd-a2e7af655ddb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.133251 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.666226 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7lmk" event={"ID":"ab3ed7e6-c344-450a-b8dd-a2e7af655ddb","Type":"ContainerDied","Data":"64de0e010b59e43cf426fac0d89a1317babc15e2c968e1879a31613c60dfc4a6"} Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.666281 4908 scope.go:117] "RemoveContainer" containerID="d81d928a9849beda2fc354d99b832539b8329393ea46eb6f72a84fd2e228e486" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.666298 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7lmk" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.693561 4908 scope.go:117] "RemoveContainer" containerID="f48ffb42db4080b7c1f7608d9ef73845fe36d9c73661a3ce3b8a160f6820e692" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.701881 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.724813 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7lmk"] Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.749616 4908 scope.go:117] "RemoveContainer" containerID="3bdeb8030f06b3e30c4c6a4351a868fe69103f75b0438d8672b8240e96a8a404" Jan 31 07:55:25 crc kubenswrapper[4908]: I0131 07:55:25.954945 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" path="/var/lib/kubelet/pods/ab3ed7e6-c344-450a-b8dd-a2e7af655ddb/volumes" Jan 31 07:55:29 crc kubenswrapper[4908]: I0131 07:55:29.624200 4908 scope.go:117] "RemoveContainer" containerID="90fd9664b230b304a2cdfabbac2b9ff4b0d5078416e6527ab66d6fbb187cd89f" Jan 31 07:55:29 crc kubenswrapper[4908]: I0131 07:55:29.667325 4908 scope.go:117] "RemoveContainer" containerID="7f4a49a40ac208b54feea15b6d7d43ff1b3c2bc69f05951587dc554e55a7ab0d" Jan 31 07:55:30 crc kubenswrapper[4908]: I0131 07:55:30.045504 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pws6x"] Jan 31 07:55:30 crc kubenswrapper[4908]: I0131 07:55:30.053728 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pws6x"] Jan 31 07:55:31 crc kubenswrapper[4908]: I0131 07:55:31.954489 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f33284f-ffc2-4c8a-8b18-9b2b4083ed18" path="/var/lib/kubelet/pods/7f33284f-ffc2-4c8a-8b18-9b2b4083ed18/volumes" Jan 31 07:56:10 crc 
kubenswrapper[4908]: I0131 07:56:10.430846 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:56:10 crc kubenswrapper[4908]: I0131 07:56:10.431348 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:56:29 crc kubenswrapper[4908]: I0131 07:56:29.775181 4908 scope.go:117] "RemoveContainer" containerID="8637279c0317c8b9b4bef6804af9643143307db0805b845a2816105886485b27" Jan 31 07:56:40 crc kubenswrapper[4908]: I0131 07:56:40.431274 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:56:40 crc kubenswrapper[4908]: I0131 07:56:40.431860 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:57:10 crc kubenswrapper[4908]: I0131 07:57:10.430961 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 31 07:57:10 crc kubenswrapper[4908]: I0131 07:57:10.431605 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:57:10 crc kubenswrapper[4908]: I0131 07:57:10.431661 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 07:57:10 crc kubenswrapper[4908]: I0131 07:57:10.432517 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:57:10 crc kubenswrapper[4908]: I0131 07:57:10.432586 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6" gracePeriod=600 Jan 31 07:57:11 crc kubenswrapper[4908]: I0131 07:57:11.629068 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6" exitCode=0 Jan 31 07:57:11 crc kubenswrapper[4908]: I0131 07:57:11.630182 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6"} Jan 31 07:57:11 crc kubenswrapper[4908]: I0131 07:57:11.630223 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"} Jan 31 07:57:11 crc kubenswrapper[4908]: I0131 07:57:11.630298 4908 scope.go:117] "RemoveContainer" containerID="3a8061e1a8e134cc5688df613e6b960e43ee581ec46198b97b778c0243981def" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.947399 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:58:40 crc kubenswrapper[4908]: E0131 07:58:40.948441 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948458 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" Jan 31 07:58:40 crc kubenswrapper[4908]: E0131 07:58:40.948476 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fb4b71-168a-4659-80ff-445b761ea66a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948485 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fb4b71-168a-4659-80ff-445b761ea66a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:40 crc kubenswrapper[4908]: E0131 07:58:40.948504 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="extract-content" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948510 4908 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="extract-content" Jan 31 07:58:40 crc kubenswrapper[4908]: E0131 07:58:40.948521 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="extract-utilities" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948527 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="extract-utilities" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948682 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fb4b71-168a-4659-80ff-445b761ea66a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.948706 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3ed7e6-c344-450a-b8dd-a2e7af655ddb" containerName="registry-server" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.949914 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:40 crc kubenswrapper[4908]: I0131 07:58:40.960935 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.009219 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.009313 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.009387 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l55jm\" (UniqueName: \"kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.110884 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.111033 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l55jm\" (UniqueName: \"kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.111137 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.111644 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.111801 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.132601 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l55jm\" (UniqueName: \"kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm\") pod \"certified-operators-f9hx9\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.269195 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:41 crc kubenswrapper[4908]: I0131 07:58:41.859498 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:58:42 crc kubenswrapper[4908]: I0131 07:58:42.116761 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerStarted","Data":"02c5f521f44e880471df3bf85ed74fdd295c5b388ef08519d23c9306d7b65c28"} Jan 31 07:58:43 crc kubenswrapper[4908]: I0131 07:58:43.126175 4908 generic.go:334] "Generic (PLEG): container finished" podID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerID="b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa" exitCode=0 Jan 31 07:58:43 crc kubenswrapper[4908]: I0131 07:58:43.126232 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerDied","Data":"b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa"} Jan 31 07:58:43 crc kubenswrapper[4908]: I0131 07:58:43.128488 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:58:46 crc kubenswrapper[4908]: I0131 07:58:46.151666 4908 generic.go:334] "Generic (PLEG): container finished" podID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerID="2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b" exitCode=0 Jan 31 07:58:46 crc kubenswrapper[4908]: I0131 07:58:46.151730 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerDied","Data":"2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b"} Jan 31 07:58:48 crc kubenswrapper[4908]: I0131 07:58:48.170393 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerStarted","Data":"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9"} Jan 31 07:58:48 crc kubenswrapper[4908]: I0131 07:58:48.196662 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f9hx9" podStartSLOduration=3.433196832 podStartE2EDuration="8.196640691s" podCreationTimestamp="2026-01-31 07:58:40 +0000 UTC" firstStartedPulling="2026-01-31 07:58:43.128251971 +0000 UTC m=+2229.744196625" lastFinishedPulling="2026-01-31 07:58:47.89169582 +0000 UTC m=+2234.507640484" observedRunningTime="2026-01-31 07:58:48.188266332 +0000 UTC m=+2234.804210976" watchObservedRunningTime="2026-01-31 07:58:48.196640691 +0000 UTC m=+2234.812585345" Jan 31 07:58:51 crc kubenswrapper[4908]: I0131 07:58:51.269729 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:51 crc kubenswrapper[4908]: I0131 07:58:51.270088 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:58:51 crc kubenswrapper[4908]: I0131 07:58:51.337537 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:59:01 crc kubenswrapper[4908]: I0131 07:59:01.322471 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:59:01 crc kubenswrapper[4908]: I0131 07:59:01.374025 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.283725 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f9hx9" 
podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="registry-server" containerID="cri-o://f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9" gracePeriod=2 Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.870817 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.938881 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities\") pod \"b4fd7c3d-f896-4130-9841-ef7001f887ae\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.939485 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content\") pod \"b4fd7c3d-f896-4130-9841-ef7001f887ae\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.939855 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities" (OuterVolumeSpecName: "utilities") pod "b4fd7c3d-f896-4130-9841-ef7001f887ae" (UID: "b4fd7c3d-f896-4130-9841-ef7001f887ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.940621 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l55jm\" (UniqueName: \"kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm\") pod \"b4fd7c3d-f896-4130-9841-ef7001f887ae\" (UID: \"b4fd7c3d-f896-4130-9841-ef7001f887ae\") " Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.942288 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.946647 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm" (OuterVolumeSpecName: "kube-api-access-l55jm") pod "b4fd7c3d-f896-4130-9841-ef7001f887ae" (UID: "b4fd7c3d-f896-4130-9841-ef7001f887ae"). InnerVolumeSpecName "kube-api-access-l55jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:59:02 crc kubenswrapper[4908]: I0131 07:59:02.989378 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4fd7c3d-f896-4130-9841-ef7001f887ae" (UID: "b4fd7c3d-f896-4130-9841-ef7001f887ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.044300 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fd7c3d-f896-4130-9841-ef7001f887ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.044798 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l55jm\" (UniqueName: \"kubernetes.io/projected/b4fd7c3d-f896-4130-9841-ef7001f887ae-kube-api-access-l55jm\") on node \"crc\" DevicePath \"\"" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.296441 4908 generic.go:334] "Generic (PLEG): container finished" podID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerID="f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9" exitCode=0 Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.296495 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerDied","Data":"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9"} Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.296509 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f9hx9" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.296530 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f9hx9" event={"ID":"b4fd7c3d-f896-4130-9841-ef7001f887ae","Type":"ContainerDied","Data":"02c5f521f44e880471df3bf85ed74fdd295c5b388ef08519d23c9306d7b65c28"} Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.296554 4908 scope.go:117] "RemoveContainer" containerID="f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.329208 4908 scope.go:117] "RemoveContainer" containerID="2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.337467 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.347710 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f9hx9"] Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.353304 4908 scope.go:117] "RemoveContainer" containerID="b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.418328 4908 scope.go:117] "RemoveContainer" containerID="f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9" Jan 31 07:59:03 crc kubenswrapper[4908]: E0131 07:59:03.419154 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9\": container with ID starting with f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9 not found: ID does not exist" containerID="f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.419204 4908 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9"} err="failed to get container status \"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9\": rpc error: code = NotFound desc = could not find container \"f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9\": container with ID starting with f5d902bad1ef05980ba17ca4baf5fc953c7691c021773ecce6da7d71b6491fd9 not found: ID does not exist" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.419235 4908 scope.go:117] "RemoveContainer" containerID="2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b" Jan 31 07:59:03 crc kubenswrapper[4908]: E0131 07:59:03.419628 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b\": container with ID starting with 2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b not found: ID does not exist" containerID="2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.419666 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b"} err="failed to get container status \"2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b\": rpc error: code = NotFound desc = could not find container \"2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b\": container with ID starting with 2308efa9b6937b863320cf44d4d8716fe502e5fa94adab75ca3f901f75b23d0b not found: ID does not exist" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.419689 4908 scope.go:117] "RemoveContainer" containerID="b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa" Jan 31 07:59:03 crc kubenswrapper[4908]: E0131 
07:59:03.420007 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa\": container with ID starting with b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa not found: ID does not exist" containerID="b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.420028 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa"} err="failed to get container status \"b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa\": rpc error: code = NotFound desc = could not find container \"b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa\": container with ID starting with b9ae082b18e3bf51a2106fc84f7af442cbb17f14ed0fd6cc9401746a3accc3fa not found: ID does not exist" Jan 31 07:59:03 crc kubenswrapper[4908]: I0131 07:59:03.956283 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" path="/var/lib/kubelet/pods/b4fd7c3d-f896-4130-9841-ef7001f887ae/volumes" Jan 31 07:59:40 crc kubenswrapper[4908]: I0131 07:59:40.431959 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:59:40 crc kubenswrapper[4908]: I0131 07:59:40.432617 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.152759 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b"] Jan 31 08:00:00 crc kubenswrapper[4908]: E0131 08:00:00.158566 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="extract-content" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.158587 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="extract-content" Jan 31 08:00:00 crc kubenswrapper[4908]: E0131 08:00:00.158617 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="extract-utilities" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.158625 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="extract-utilities" Jan 31 08:00:00 crc kubenswrapper[4908]: E0131 08:00:00.158639 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="registry-server" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.158645 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="registry-server" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.158824 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fd7c3d-f896-4130-9841-ef7001f887ae" containerName="registry-server" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.159482 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.161658 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.161692 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.165413 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b"] Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.275273 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpncb\" (UniqueName: \"kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.275339 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.275432 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.378204 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.378304 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpncb\" (UniqueName: \"kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.378369 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.380299 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.385229 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.399795 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpncb\" (UniqueName: \"kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb\") pod \"collect-profiles-29497440-zrh8b\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.484963 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:00 crc kubenswrapper[4908]: I0131 08:00:00.934047 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b"] Jan 31 08:00:01 crc kubenswrapper[4908]: I0131 08:00:01.814202 4908 generic.go:334] "Generic (PLEG): container finished" podID="705e20d7-875b-4674-9d26-463b7b47e9a7" containerID="d68382d098de1613d80dd8bae23151a2588d0996705c258c153095b41d7d72fb" exitCode=0 Jan 31 08:00:01 crc kubenswrapper[4908]: I0131 08:00:01.814290 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" event={"ID":"705e20d7-875b-4674-9d26-463b7b47e9a7","Type":"ContainerDied","Data":"d68382d098de1613d80dd8bae23151a2588d0996705c258c153095b41d7d72fb"} Jan 31 08:00:01 crc kubenswrapper[4908]: I0131 08:00:01.814498 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" 
event={"ID":"705e20d7-875b-4674-9d26-463b7b47e9a7","Type":"ContainerStarted","Data":"28e2038eb567c663f57d3237f2a90874e9075872e974185cbb66ddc894508692"} Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.187476 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.337501 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume\") pod \"705e20d7-875b-4674-9d26-463b7b47e9a7\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.337548 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpncb\" (UniqueName: \"kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb\") pod \"705e20d7-875b-4674-9d26-463b7b47e9a7\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.337590 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume\") pod \"705e20d7-875b-4674-9d26-463b7b47e9a7\" (UID: \"705e20d7-875b-4674-9d26-463b7b47e9a7\") " Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.338294 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "705e20d7-875b-4674-9d26-463b7b47e9a7" (UID: "705e20d7-875b-4674-9d26-463b7b47e9a7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.338944 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e20d7-875b-4674-9d26-463b7b47e9a7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.342966 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb" (OuterVolumeSpecName: "kube-api-access-jpncb") pod "705e20d7-875b-4674-9d26-463b7b47e9a7" (UID: "705e20d7-875b-4674-9d26-463b7b47e9a7"). InnerVolumeSpecName "kube-api-access-jpncb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.351162 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "705e20d7-875b-4674-9d26-463b7b47e9a7" (UID: "705e20d7-875b-4674-9d26-463b7b47e9a7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.441549 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpncb\" (UniqueName: \"kubernetes.io/projected/705e20d7-875b-4674-9d26-463b7b47e9a7-kube-api-access-jpncb\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.441607 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/705e20d7-875b-4674-9d26-463b7b47e9a7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.843424 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" event={"ID":"705e20d7-875b-4674-9d26-463b7b47e9a7","Type":"ContainerDied","Data":"28e2038eb567c663f57d3237f2a90874e9075872e974185cbb66ddc894508692"} Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.843464 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e2038eb567c663f57d3237f2a90874e9075872e974185cbb66ddc894508692" Jan 31 08:00:03 crc kubenswrapper[4908]: I0131 08:00:03.843489 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b" Jan 31 08:00:04 crc kubenswrapper[4908]: I0131 08:00:04.274184 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"] Jan 31 08:00:04 crc kubenswrapper[4908]: I0131 08:00:04.282243 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-k4x5t"] Jan 31 08:00:05 crc kubenswrapper[4908]: I0131 08:00:05.952538 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe51cf59-5f34-4f01-8404-2f95b7ca742b" path="/var/lib/kubelet/pods/fe51cf59-5f34-4f01-8404-2f95b7ca742b/volumes" Jan 31 08:00:10 crc kubenswrapper[4908]: I0131 08:00:10.431357 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:00:10 crc kubenswrapper[4908]: I0131 08:00:10.431929 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:00:30 crc kubenswrapper[4908]: I0131 08:00:30.017129 4908 scope.go:117] "RemoveContainer" containerID="c1e155866104b6dc0710b8f68dc7c3386a414264ab5b5cce8f826379dfa04c58" Jan 31 08:00:40 crc kubenswrapper[4908]: I0131 08:00:40.431646 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 08:00:40 crc kubenswrapper[4908]: I0131 08:00:40.432249 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:00:40 crc kubenswrapper[4908]: I0131 08:00:40.432309 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:00:40 crc kubenswrapper[4908]: I0131 08:00:40.433134 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:00:40 crc kubenswrapper[4908]: I0131 08:00:40.433186 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" gracePeriod=600 Jan 31 08:00:41 crc kubenswrapper[4908]: E0131 08:00:41.073625 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:00:41 crc kubenswrapper[4908]: I0131 
08:00:41.177216 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" exitCode=0 Jan 31 08:00:41 crc kubenswrapper[4908]: I0131 08:00:41.177261 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"} Jan 31 08:00:41 crc kubenswrapper[4908]: I0131 08:00:41.177298 4908 scope.go:117] "RemoveContainer" containerID="be8190fad5773a1ed4c2064880cbb099b59976ded3b5fd0bc8e24e0b9a05cbc6" Jan 31 08:00:41 crc kubenswrapper[4908]: I0131 08:00:41.178220 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:00:41 crc kubenswrapper[4908]: E0131 08:00:41.178529 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:00:53 crc kubenswrapper[4908]: I0131 08:00:53.947863 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:00:53 crc kubenswrapper[4908]: E0131 08:00:53.949173 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:00:55 crc kubenswrapper[4908]: E0131 08:00:55.461811 4908 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.46:47002->38.102.83.46:44229: read tcp 38.102.83.46:47002->38.102.83.46:44229: read: connection reset by peer Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.150440 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497441-tkvc9"] Jan 31 08:01:00 crc kubenswrapper[4908]: E0131 08:01:00.151270 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705e20d7-875b-4674-9d26-463b7b47e9a7" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.151283 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="705e20d7-875b-4674-9d26-463b7b47e9a7" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.151469 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="705e20d7-875b-4674-9d26-463b7b47e9a7" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.152016 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.163503 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497441-tkvc9"] Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.291526 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.291851 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.292002 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.292046 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf824\" (UniqueName: \"kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.394248 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.394661 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.394857 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.395145 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf824\" (UniqueName: \"kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.400469 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.400908 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.407186 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.411341 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf824\" (UniqueName: \"kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824\") pod \"keystone-cron-29497441-tkvc9\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.506679 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:00 crc kubenswrapper[4908]: I0131 08:01:00.954649 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497441-tkvc9"] Jan 31 08:01:01 crc kubenswrapper[4908]: I0131 08:01:01.343960 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-tkvc9" event={"ID":"2dd5446d-4cd7-4de0-83cb-0420400ee416","Type":"ContainerStarted","Data":"1121adca5d7b0bceb1e170e5a61b3c967bb8afad709c603362301c1b3fd9d607"} Jan 31 08:01:01 crc kubenswrapper[4908]: I0131 08:01:01.344023 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-tkvc9" event={"ID":"2dd5446d-4cd7-4de0-83cb-0420400ee416","Type":"ContainerStarted","Data":"2d111a31bdb4bf0b813737e8a9e7b235e5887de1e80ed56568e1e9b3836ea589"} Jan 31 08:01:04 crc kubenswrapper[4908]: I0131 08:01:04.368051 4908 generic.go:334] "Generic (PLEG): container finished" podID="2dd5446d-4cd7-4de0-83cb-0420400ee416" containerID="1121adca5d7b0bceb1e170e5a61b3c967bb8afad709c603362301c1b3fd9d607" exitCode=0 Jan 31 08:01:04 crc kubenswrapper[4908]: I0131 08:01:04.368154 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-tkvc9" event={"ID":"2dd5446d-4cd7-4de0-83cb-0420400ee416","Type":"ContainerDied","Data":"1121adca5d7b0bceb1e170e5a61b3c967bb8afad709c603362301c1b3fd9d607"} Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.751408 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497441-tkvc9" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.796064 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys\") pod \"2dd5446d-4cd7-4de0-83cb-0420400ee416\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.796221 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data\") pod \"2dd5446d-4cd7-4de0-83cb-0420400ee416\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.796303 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf824\" (UniqueName: \"kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824\") pod \"2dd5446d-4cd7-4de0-83cb-0420400ee416\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.796357 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle\") pod \"2dd5446d-4cd7-4de0-83cb-0420400ee416\" (UID: \"2dd5446d-4cd7-4de0-83cb-0420400ee416\") " Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.804285 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824" (OuterVolumeSpecName: "kube-api-access-lf824") pod "2dd5446d-4cd7-4de0-83cb-0420400ee416" (UID: "2dd5446d-4cd7-4de0-83cb-0420400ee416"). InnerVolumeSpecName "kube-api-access-lf824". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.806338 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2dd5446d-4cd7-4de0-83cb-0420400ee416" (UID: "2dd5446d-4cd7-4de0-83cb-0420400ee416"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.847297 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd5446d-4cd7-4de0-83cb-0420400ee416" (UID: "2dd5446d-4cd7-4de0-83cb-0420400ee416"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.854693 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data" (OuterVolumeSpecName: "config-data") pod "2dd5446d-4cd7-4de0-83cb-0420400ee416" (UID: "2dd5446d-4cd7-4de0-83cb-0420400ee416"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.898617 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf824\" (UniqueName: \"kubernetes.io/projected/2dd5446d-4cd7-4de0-83cb-0420400ee416-kube-api-access-lf824\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.898653 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.898663 4908 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4908]: I0131 08:01:05.898673 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd5446d-4cd7-4de0-83cb-0420400ee416-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:06 crc kubenswrapper[4908]: I0131 08:01:06.387093 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-tkvc9" event={"ID":"2dd5446d-4cd7-4de0-83cb-0420400ee416","Type":"ContainerDied","Data":"2d111a31bdb4bf0b813737e8a9e7b235e5887de1e80ed56568e1e9b3836ea589"} Jan 31 08:01:06 crc kubenswrapper[4908]: I0131 08:01:06.387141 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d111a31bdb4bf0b813737e8a9e7b235e5887de1e80ed56568e1e9b3836ea589" Jan 31 08:01:06 crc kubenswrapper[4908]: I0131 08:01:06.387172 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497441-tkvc9"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.395517 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.407820 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.415752 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.422759 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpj2x"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.429421 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.435308 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.442108 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.448568 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.455087 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.461485 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fs7kz"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.467664 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-z6hbk"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.474162 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpj2x"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.479730 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p4c6r"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.485933 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sdvct"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.491784 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l447t"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.498514 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qfkgh"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.504584 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9wgf9"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.510118 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7s8fp"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.516816 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.523330 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ng5hx"]
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.946832 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:01:07 crc kubenswrapper[4908]: E0131 08:01:07.947220 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.952813 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b59cc14-57f8-4c8d-9807-fdd0194ea923" path="/var/lib/kubelet/pods/1b59cc14-57f8-4c8d-9807-fdd0194ea923/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.953657 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fb4b71-168a-4659-80ff-445b761ea66a" path="/var/lib/kubelet/pods/24fb4b71-168a-4659-80ff-445b761ea66a/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.954285 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277022d5-bc50-4ebf-a267-f835f1656f8d" path="/var/lib/kubelet/pods/277022d5-bc50-4ebf-a267-f835f1656f8d/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.955057 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cb28b5-2325-4864-8a7d-fb90210a6053" path="/var/lib/kubelet/pods/38cb28b5-2325-4864-8a7d-fb90210a6053/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.956617 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ca2ee4-9af6-4ab5-91f0-0e5203ad585d" path="/var/lib/kubelet/pods/49ca2ee4-9af6-4ab5-91f0-0e5203ad585d/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.957321 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b900b02-6c80-43b1-96ac-7a4687cc4e65" path="/var/lib/kubelet/pods/5b900b02-6c80-43b1-96ac-7a4687cc4e65/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.958012 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab03dca-2cc4-4689-bac0-8d3fa92855c5" path="/var/lib/kubelet/pods/6ab03dca-2cc4-4689-bac0-8d3fa92855c5/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.959379 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bf529d-bd61-4ef6-98e4-22cfcf412e48" path="/var/lib/kubelet/pods/74bf529d-bd61-4ef6-98e4-22cfcf412e48/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.960113 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d37be8-5527-495f-9013-36361f697414" path="/var/lib/kubelet/pods/87d37be8-5527-495f-9013-36361f697414/volumes"
Jan 31 08:01:07 crc kubenswrapper[4908]: I0131 08:01:07.960724 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7eef61-f6ee-4f41-8207-d010fcde59e5" path="/var/lib/kubelet/pods/bf7eef61-f6ee-4f41-8207-d010fcde59e5/volumes"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.399708 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"]
Jan 31 08:01:13 crc kubenswrapper[4908]: E0131 08:01:13.401210 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd5446d-4cd7-4de0-83cb-0420400ee416" containerName="keystone-cron"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.401512 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd5446d-4cd7-4de0-83cb-0420400ee416" containerName="keystone-cron"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.404512 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd5446d-4cd7-4de0-83cb-0420400ee416" containerName="keystone-cron"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.405955 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.413506 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.413742 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.414309 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.415202 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.415371 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.428100 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"]
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.462791 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.462969 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.463034 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm2j\" (UniqueName: \"kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.463091 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.463114 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.564797 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.564971 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.565040 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm2j\" (UniqueName: \"kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.565099 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.565123 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.572622 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.572916 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.580049 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.581688 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.582859 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm2j\" (UniqueName: \"kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:13 crc kubenswrapper[4908]: I0131 08:01:13.727315 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:14 crc kubenswrapper[4908]: I0131 08:01:14.228091 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"]
Jan 31 08:01:14 crc kubenswrapper[4908]: W0131 08:01:14.233615 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a117b4_fbbf_464c_a5bb_301f52736dee.slice/crio-7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b WatchSource:0}: Error finding container 7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b: Status 404 returned error can't find the container with id 7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b
Jan 31 08:01:14 crc kubenswrapper[4908]: I0131 08:01:14.454419 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5" event={"ID":"48a117b4-fbbf-464c-a5bb-301f52736dee","Type":"ContainerStarted","Data":"7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b"}
Jan 31 08:01:15 crc kubenswrapper[4908]: I0131 08:01:15.465710 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5" event={"ID":"48a117b4-fbbf-464c-a5bb-301f52736dee","Type":"ContainerStarted","Data":"9b686c895118d792de335148217f15435dec2299ef2561549cf479c9cb7233bf"}
Jan 31 08:01:15 crc kubenswrapper[4908]: I0131 08:01:15.486203 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5" podStartSLOduration=1.58099787 podStartE2EDuration="2.486182419s" podCreationTimestamp="2026-01-31 08:01:13 +0000 UTC" firstStartedPulling="2026-01-31 08:01:14.235826485 +0000 UTC m=+2380.851771139" lastFinishedPulling="2026-01-31 08:01:15.141011034 +0000 UTC m=+2381.756955688" observedRunningTime="2026-01-31 08:01:15.485324377 +0000 UTC m=+2382.101269031" watchObservedRunningTime="2026-01-31 08:01:15.486182419 +0000 UTC m=+2382.102127073"
Jan 31 08:01:20 crc kubenswrapper[4908]: I0131 08:01:20.940594 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:01:20 crc kubenswrapper[4908]: E0131 08:01:20.941354 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:01:28 crc kubenswrapper[4908]: I0131 08:01:28.576873 4908 generic.go:334] "Generic (PLEG): container finished" podID="48a117b4-fbbf-464c-a5bb-301f52736dee" containerID="9b686c895118d792de335148217f15435dec2299ef2561549cf479c9cb7233bf" exitCode=0
Jan 31 08:01:28 crc kubenswrapper[4908]: I0131 08:01:28.577013 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5" event={"ID":"48a117b4-fbbf-464c-a5bb-301f52736dee","Type":"ContainerDied","Data":"9b686c895118d792de335148217f15435dec2299ef2561549cf479c9cb7233bf"}
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.007082 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.098787 4908 scope.go:117] "RemoveContainer" containerID="15902a77fcc65c29b755ea676545ef17f4ad368a0f4f8a6c3b3eac1513110b57"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.111544 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam\") pod \"48a117b4-fbbf-464c-a5bb-301f52736dee\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") "
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.111598 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle\") pod \"48a117b4-fbbf-464c-a5bb-301f52736dee\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") "
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.111722 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory\") pod \"48a117b4-fbbf-464c-a5bb-301f52736dee\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") "
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.111811 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph\") pod \"48a117b4-fbbf-464c-a5bb-301f52736dee\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") "
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.111841 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rm2j\" (UniqueName: \"kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j\") pod \"48a117b4-fbbf-464c-a5bb-301f52736dee\" (UID: \"48a117b4-fbbf-464c-a5bb-301f52736dee\") "
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.118561 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j" (OuterVolumeSpecName: "kube-api-access-9rm2j") pod "48a117b4-fbbf-464c-a5bb-301f52736dee" (UID: "48a117b4-fbbf-464c-a5bb-301f52736dee"). InnerVolumeSpecName "kube-api-access-9rm2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.123054 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "48a117b4-fbbf-464c-a5bb-301f52736dee" (UID: "48a117b4-fbbf-464c-a5bb-301f52736dee"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.128178 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph" (OuterVolumeSpecName: "ceph") pod "48a117b4-fbbf-464c-a5bb-301f52736dee" (UID: "48a117b4-fbbf-464c-a5bb-301f52736dee"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.141080 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory" (OuterVolumeSpecName: "inventory") pod "48a117b4-fbbf-464c-a5bb-301f52736dee" (UID: "48a117b4-fbbf-464c-a5bb-301f52736dee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.150468 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48a117b4-fbbf-464c-a5bb-301f52736dee" (UID: "48a117b4-fbbf-464c-a5bb-301f52736dee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.161375 4908 scope.go:117] "RemoveContainer" containerID="a9337c4a5e3bcca4a2af413c574c3947514656569bd223d6910e2d228e240d7c"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.215643 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.215679 4908 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.215691 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.215703 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48a117b4-fbbf-464c-a5bb-301f52736dee-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.215712 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rm2j\" (UniqueName: \"kubernetes.io/projected/48a117b4-fbbf-464c-a5bb-301f52736dee-kube-api-access-9rm2j\") on node \"crc\" DevicePath \"\""
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.247414 4908 scope.go:117] "RemoveContainer" containerID="84e6c77034c3a1cc91f63f7a76fa5a4a775864ee8981d19e4d54d45ff3b63730"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.308505 4908 scope.go:117] "RemoveContainer" containerID="c3af099a0d4748bb0e756818e73874c4f9ad162ffb0d9208446d192e005c4b99"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.337949 4908 scope.go:117] "RemoveContainer" containerID="adc9480650a8ae918fbebe6ecd320355121b55c388bbcb5660eb33a5c1a79e75"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.372090 4908 scope.go:117] "RemoveContainer" containerID="cfed6f9e002ec9468c878446ccbdb018141c87f9cf96960d0e47756e1eeaa2e8"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.403748 4908 scope.go:117] "RemoveContainer" containerID="48f57522321517580267732d3cd4630dd0066853433e4e0c7b51fc2c1df85952"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.452490 4908 scope.go:117] "RemoveContainer" containerID="9d38c2f76bab226165e916c8bb0fbf290d3bbf3824bfa99c401a80a277346179"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.490204 4908 scope.go:117] "RemoveContainer" containerID="c7e4e8e329926e10155fa62e2fe09582cb3c7547c2623eda72edea5312f6d757"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.525893 4908 scope.go:117] "RemoveContainer" containerID="d6b08802f446c2274874f7d7b48ef3089160d151f3a33a24f50951594f9aa7bc"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.615864 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5" event={"ID":"48a117b4-fbbf-464c-a5bb-301f52736dee","Type":"ContainerDied","Data":"7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b"}
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.615905 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7768ddd3b507843af831dcda57cac5f9c40572f9ad851a343631629bf91c3a9b"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.616004 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.688584 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"]
Jan 31 08:01:30 crc kubenswrapper[4908]: E0131 08:01:30.689025 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a117b4-fbbf-464c-a5bb-301f52736dee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.689044 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a117b4-fbbf-464c-a5bb-301f52736dee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.689237 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a117b4-fbbf-464c-a5bb-301f52736dee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.689957 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.696415 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.696570 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.697134 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.697338 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.697528 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.700167 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"]
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.734688 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.735059 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9r6\" (UniqueName: \"kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.735128 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.735412 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.735560 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.842055 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.842129 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk9r6\" (UniqueName: \"kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.842175 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.842228 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.842258 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.848777 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.848809 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.849702 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.849843 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:30 crc kubenswrapper[4908]: I0131 08:01:30.861274 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk9r6\" (UniqueName: \"kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:31 crc kubenswrapper[4908]: I0131 08:01:31.026654 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"
Jan 31 08:01:31 crc kubenswrapper[4908]: I0131 08:01:31.531220 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq"]
Jan 31 08:01:31 crc kubenswrapper[4908]: W0131 08:01:31.534394 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46102316_b0a9_463d_9b57_6499478e3031.slice/crio-d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7 WatchSource:0}: Error finding container d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7: Status 404 returned error can't find the container with id d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7
Jan 31 08:01:31 crc kubenswrapper[4908]: I0131 08:01:31.638576 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" event={"ID":"46102316-b0a9-463d-9b57-6499478e3031","Type":"ContainerStarted","Data":"d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7"}
Jan 31 08:01:32 crc kubenswrapper[4908]: I0131 08:01:32.939845 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:01:32 crc kubenswrapper[4908]: E0131 08:01:32.941264 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:01:33 crc kubenswrapper[4908]: I0131 08:01:33.655179 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" event={"ID":"46102316-b0a9-463d-9b57-6499478e3031","Type":"ContainerStarted","Data":"490b8e4b69eb6ef7de59c3203ecc34e6055ee8967c09142048a0fb6325ccd002"}
Jan 31 08:01:33 crc kubenswrapper[4908]: I0131 08:01:33.671292 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" podStartSLOduration=2.047161093 podStartE2EDuration="3.671273551s" podCreationTimestamp="2026-01-31 08:01:30 +0000 UTC" firstStartedPulling="2026-01-31 08:01:31.537184865 +0000 UTC m=+2398.153129539" lastFinishedPulling="2026-01-31 08:01:33.161297343 +0000 UTC m=+2399.777241997" observedRunningTime="2026-01-31 08:01:33.670115063 +0000 UTC m=+2400.286059717" watchObservedRunningTime="2026-01-31 08:01:33.671273551 +0000 UTC m=+2400.287218205"
Jan 31 08:01:45 crc kubenswrapper[4908]: I0131 08:01:45.940459 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:01:45 crc kubenswrapper[4908]: E0131 08:01:45.941301 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:01:57 crc kubenswrapper[4908]: I0131 08:01:57.945092 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:01:57 crc kubenswrapper[4908]: E0131 08:01:57.947300 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:02:11 crc kubenswrapper[4908]: I0131 08:02:11.940908 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:02:11 crc kubenswrapper[4908]: E0131 08:02:11.942050 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:02:22 crc kubenswrapper[4908]: I0131 08:02:22.941190 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:02:22 crc kubenswrapper[4908]: E0131 08:02:22.942085 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:02:36 crc kubenswrapper[4908]: I0131 08:02:36.940210 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:02:36 crc kubenswrapper[4908]: E0131 08:02:36.941014 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:02:50 crc kubenswrapper[4908]: I0131 08:02:50.940237 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:02:50 crc kubenswrapper[4908]: E0131 08:02:50.941025 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:03:02 crc kubenswrapper[4908]: I0131 08:03:02.941527 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:03:02 crc kubenswrapper[4908]: E0131 08:03:02.942805 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:03:08 crc kubenswrapper[4908]: I0131 08:03:08.527079 4908 generic.go:334] "Generic (PLEG): container finished" podID="46102316-b0a9-463d-9b57-6499478e3031" containerID="490b8e4b69eb6ef7de59c3203ecc34e6055ee8967c09142048a0fb6325ccd002" exitCode=0 Jan 31 08:03:08 crc kubenswrapper[4908]: I0131 08:03:08.527146 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" event={"ID":"46102316-b0a9-463d-9b57-6499478e3031","Type":"ContainerDied","Data":"490b8e4b69eb6ef7de59c3203ecc34e6055ee8967c09142048a0fb6325ccd002"} Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.033023 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.133221 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph\") pod \"46102316-b0a9-463d-9b57-6499478e3031\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.133324 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam\") pod \"46102316-b0a9-463d-9b57-6499478e3031\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.133368 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk9r6\" (UniqueName: \"kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6\") pod \"46102316-b0a9-463d-9b57-6499478e3031\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.133509 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory\") pod \"46102316-b0a9-463d-9b57-6499478e3031\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.133577 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle\") pod \"46102316-b0a9-463d-9b57-6499478e3031\" (UID: \"46102316-b0a9-463d-9b57-6499478e3031\") " Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.139700 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph" (OuterVolumeSpecName: "ceph") pod "46102316-b0a9-463d-9b57-6499478e3031" (UID: "46102316-b0a9-463d-9b57-6499478e3031"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.139827 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "46102316-b0a9-463d-9b57-6499478e3031" (UID: "46102316-b0a9-463d-9b57-6499478e3031"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.140308 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6" (OuterVolumeSpecName: "kube-api-access-nk9r6") pod "46102316-b0a9-463d-9b57-6499478e3031" (UID: "46102316-b0a9-463d-9b57-6499478e3031"). InnerVolumeSpecName "kube-api-access-nk9r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.165159 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46102316-b0a9-463d-9b57-6499478e3031" (UID: "46102316-b0a9-463d-9b57-6499478e3031"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.168190 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory" (OuterVolumeSpecName: "inventory") pod "46102316-b0a9-463d-9b57-6499478e3031" (UID: "46102316-b0a9-463d-9b57-6499478e3031"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.236020 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.236050 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk9r6\" (UniqueName: \"kubernetes.io/projected/46102316-b0a9-463d-9b57-6499478e3031-kube-api-access-nk9r6\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.236060 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.236069 4908 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.236079 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46102316-b0a9-463d-9b57-6499478e3031-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.546526 4908 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" event={"ID":"46102316-b0a9-463d-9b57-6499478e3031","Type":"ContainerDied","Data":"d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7"} Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.546789 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.546798 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d650d259a9bcbb8d58a7da259e99b82cffb593ae0d02684378a86564b0f5d0f7" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.655421 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2"] Jan 31 08:03:10 crc kubenswrapper[4908]: E0131 08:03:10.656071 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46102316-b0a9-463d-9b57-6499478e3031" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.656096 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="46102316-b0a9-463d-9b57-6499478e3031" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.656642 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="46102316-b0a9-463d-9b57-6499478e3031" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.657700 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.660405 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.661354 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.661754 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.662058 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.662718 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.667225 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2"] Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.745320 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfs6d\" (UniqueName: \"kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.745385 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: 
\"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.745455 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.745544 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.847122 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.847433 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfs6d\" (UniqueName: \"kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" 
Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.847594 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.847696 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.851535 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.852321 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.852453 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.870131 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfs6d\" (UniqueName: \"kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:10 crc kubenswrapper[4908]: I0131 08:03:10.984470 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:11 crc kubenswrapper[4908]: I0131 08:03:11.473688 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2"] Jan 31 08:03:11 crc kubenswrapper[4908]: I0131 08:03:11.563132 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" event={"ID":"a5ab199c-fca0-4b18-aaf0-572c43e84695","Type":"ContainerStarted","Data":"cc81954f6e0cf46b80d7fcd09449d0573edd4d869dc12844d7e5b81a222b65ba"} Jan 31 08:03:12 crc kubenswrapper[4908]: I0131 08:03:12.574626 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" event={"ID":"a5ab199c-fca0-4b18-aaf0-572c43e84695","Type":"ContainerStarted","Data":"484a3303accad52c2816632a7c09ee7eb418207fc8ad91ea56cb02ef8eee308a"} Jan 31 08:03:12 crc kubenswrapper[4908]: I0131 08:03:12.602485 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" podStartSLOduration=2.192746649 
podStartE2EDuration="2.602451501s" podCreationTimestamp="2026-01-31 08:03:10 +0000 UTC" firstStartedPulling="2026-01-31 08:03:11.488732041 +0000 UTC m=+2498.104676695" lastFinishedPulling="2026-01-31 08:03:11.898436893 +0000 UTC m=+2498.514381547" observedRunningTime="2026-01-31 08:03:12.596285458 +0000 UTC m=+2499.212230112" watchObservedRunningTime="2026-01-31 08:03:12.602451501 +0000 UTC m=+2499.218396175" Jan 31 08:03:13 crc kubenswrapper[4908]: I0131 08:03:13.942342 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:03:13 crc kubenswrapper[4908]: E0131 08:03:13.942862 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:03:24 crc kubenswrapper[4908]: I0131 08:03:24.940282 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:03:24 crc kubenswrapper[4908]: E0131 08:03:24.941055 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:03:35 crc kubenswrapper[4908]: I0131 08:03:35.784625 4908 generic.go:334] "Generic (PLEG): container finished" podID="a5ab199c-fca0-4b18-aaf0-572c43e84695" 
containerID="484a3303accad52c2816632a7c09ee7eb418207fc8ad91ea56cb02ef8eee308a" exitCode=0 Jan 31 08:03:35 crc kubenswrapper[4908]: I0131 08:03:35.784711 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" event={"ID":"a5ab199c-fca0-4b18-aaf0-572c43e84695","Type":"ContainerDied","Data":"484a3303accad52c2816632a7c09ee7eb418207fc8ad91ea56cb02ef8eee308a"} Jan 31 08:03:36 crc kubenswrapper[4908]: I0131 08:03:36.940646 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:03:36 crc kubenswrapper[4908]: E0131 08:03:36.941223 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.182916 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.287600 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfs6d\" (UniqueName: \"kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d\") pod \"a5ab199c-fca0-4b18-aaf0-572c43e84695\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.287720 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam\") pod \"a5ab199c-fca0-4b18-aaf0-572c43e84695\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.287880 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory\") pod \"a5ab199c-fca0-4b18-aaf0-572c43e84695\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.287932 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph\") pod \"a5ab199c-fca0-4b18-aaf0-572c43e84695\" (UID: \"a5ab199c-fca0-4b18-aaf0-572c43e84695\") " Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.294619 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d" (OuterVolumeSpecName: "kube-api-access-pfs6d") pod "a5ab199c-fca0-4b18-aaf0-572c43e84695" (UID: "a5ab199c-fca0-4b18-aaf0-572c43e84695"). InnerVolumeSpecName "kube-api-access-pfs6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.308526 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph" (OuterVolumeSpecName: "ceph") pod "a5ab199c-fca0-4b18-aaf0-572c43e84695" (UID: "a5ab199c-fca0-4b18-aaf0-572c43e84695"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.328342 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory" (OuterVolumeSpecName: "inventory") pod "a5ab199c-fca0-4b18-aaf0-572c43e84695" (UID: "a5ab199c-fca0-4b18-aaf0-572c43e84695"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.338578 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a5ab199c-fca0-4b18-aaf0-572c43e84695" (UID: "a5ab199c-fca0-4b18-aaf0-572c43e84695"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.392339 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfs6d\" (UniqueName: \"kubernetes.io/projected/a5ab199c-fca0-4b18-aaf0-572c43e84695-kube-api-access-pfs6d\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.392385 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.392403 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.392417 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a5ab199c-fca0-4b18-aaf0-572c43e84695-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.809159 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" event={"ID":"a5ab199c-fca0-4b18-aaf0-572c43e84695","Type":"ContainerDied","Data":"cc81954f6e0cf46b80d7fcd09449d0573edd4d869dc12844d7e5b81a222b65ba"} Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.809195 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc81954f6e0cf46b80d7fcd09449d0573edd4d869dc12844d7e5b81a222b65ba" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.809262 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.923222 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2"] Jan 31 08:03:37 crc kubenswrapper[4908]: E0131 08:03:37.923947 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ab199c-fca0-4b18-aaf0-572c43e84695" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.924043 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ab199c-fca0-4b18-aaf0-572c43e84695" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.924321 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ab199c-fca0-4b18-aaf0-572c43e84695" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.925080 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.928574 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.928964 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.929460 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.930480 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.931473 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:03:37 crc kubenswrapper[4908]: I0131 08:03:37.938128 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2"] Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.004640 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqwvl\" (UniqueName: \"kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.004713 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: 
\"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.004751 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.004792 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.106965 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqwvl\" (UniqueName: \"kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.107033 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: 
I0131 08:03:38.107066 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.107105 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.111192 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.111205 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.116501 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.126328 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqwvl\" (UniqueName: \"kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.288265 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:38 crc kubenswrapper[4908]: I0131 08:03:38.820373 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2"] Jan 31 08:03:39 crc kubenswrapper[4908]: I0131 08:03:39.832074 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" event={"ID":"0c429649-0307-4288-87cb-43c90dc9bad2","Type":"ContainerStarted","Data":"2295a36175b45f8748c29030464053e77a6258a28d8b5bfc6436d7457469a191"} Jan 31 08:03:39 crc kubenswrapper[4908]: I0131 08:03:39.833088 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" event={"ID":"0c429649-0307-4288-87cb-43c90dc9bad2","Type":"ContainerStarted","Data":"f095e2692514aa7a5bab7effbc41ff24f7a49b790053035d5b2b01c4e9fe23f2"} Jan 31 08:03:39 crc kubenswrapper[4908]: I0131 08:03:39.854922 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" podStartSLOduration=2.354722978 
podStartE2EDuration="2.854906217s" podCreationTimestamp="2026-01-31 08:03:37 +0000 UTC" firstStartedPulling="2026-01-31 08:03:38.827525067 +0000 UTC m=+2525.443469721" lastFinishedPulling="2026-01-31 08:03:39.327708306 +0000 UTC m=+2525.943652960" observedRunningTime="2026-01-31 08:03:39.847670515 +0000 UTC m=+2526.463615169" watchObservedRunningTime="2026-01-31 08:03:39.854906217 +0000 UTC m=+2526.470850871" Jan 31 08:03:44 crc kubenswrapper[4908]: I0131 08:03:44.874132 4908 generic.go:334] "Generic (PLEG): container finished" podID="0c429649-0307-4288-87cb-43c90dc9bad2" containerID="2295a36175b45f8748c29030464053e77a6258a28d8b5bfc6436d7457469a191" exitCode=0 Jan 31 08:03:44 crc kubenswrapper[4908]: I0131 08:03:44.874245 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" event={"ID":"0c429649-0307-4288-87cb-43c90dc9bad2","Type":"ContainerDied","Data":"2295a36175b45f8748c29030464053e77a6258a28d8b5bfc6436d7457469a191"} Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.267548 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.410165 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory\") pod \"0c429649-0307-4288-87cb-43c90dc9bad2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.410270 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph\") pod \"0c429649-0307-4288-87cb-43c90dc9bad2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.410291 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam\") pod \"0c429649-0307-4288-87cb-43c90dc9bad2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.410330 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqwvl\" (UniqueName: \"kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl\") pod \"0c429649-0307-4288-87cb-43c90dc9bad2\" (UID: \"0c429649-0307-4288-87cb-43c90dc9bad2\") " Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.416206 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl" (OuterVolumeSpecName: "kube-api-access-xqwvl") pod "0c429649-0307-4288-87cb-43c90dc9bad2" (UID: "0c429649-0307-4288-87cb-43c90dc9bad2"). InnerVolumeSpecName "kube-api-access-xqwvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.416389 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph" (OuterVolumeSpecName: "ceph") pod "0c429649-0307-4288-87cb-43c90dc9bad2" (UID: "0c429649-0307-4288-87cb-43c90dc9bad2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.436646 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c429649-0307-4288-87cb-43c90dc9bad2" (UID: "0c429649-0307-4288-87cb-43c90dc9bad2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.438476 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory" (OuterVolumeSpecName: "inventory") pod "0c429649-0307-4288-87cb-43c90dc9bad2" (UID: "0c429649-0307-4288-87cb-43c90dc9bad2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.513395 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.513430 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.513440 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c429649-0307-4288-87cb-43c90dc9bad2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.513450 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqwvl\" (UniqueName: \"kubernetes.io/projected/0c429649-0307-4288-87cb-43c90dc9bad2-kube-api-access-xqwvl\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.892583 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" event={"ID":"0c429649-0307-4288-87cb-43c90dc9bad2","Type":"ContainerDied","Data":"f095e2692514aa7a5bab7effbc41ff24f7a49b790053035d5b2b01c4e9fe23f2"} Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.892865 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f095e2692514aa7a5bab7effbc41ff24f7a49b790053035d5b2b01c4e9fe23f2" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.892662 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.974352 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85"] Jan 31 08:03:46 crc kubenswrapper[4908]: E0131 08:03:46.974720 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c429649-0307-4288-87cb-43c90dc9bad2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.974738 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c429649-0307-4288-87cb-43c90dc9bad2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.974896 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c429649-0307-4288-87cb-43c90dc9bad2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.975583 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.979568 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.979749 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.979879 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.980124 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.982173 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:03:46 crc kubenswrapper[4908]: I0131 08:03:46.983479 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85"] Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.125105 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8vv\" (UniqueName: \"kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.125225 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.125253 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.125284 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.226947 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.227017 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.227059 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.227122 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q8vv\" (UniqueName: \"kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.232818 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.232865 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.240120 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.244175 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q8vv\" (UniqueName: \"kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jxn85\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.294509 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.812233 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85"] Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.822418 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:03:47 crc kubenswrapper[4908]: I0131 08:03:47.902688 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" event={"ID":"ff124f52-0985-4497-bc1a-4864a1973914","Type":"ContainerStarted","Data":"f1cc5993f81e98cd0c3634efdfeea60ff7c2e77de6f05b413e7deee523947993"} Jan 31 08:03:48 crc kubenswrapper[4908]: I0131 08:03:48.914235 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" event={"ID":"ff124f52-0985-4497-bc1a-4864a1973914","Type":"ContainerStarted","Data":"40fd9efbb96073a61935f280a4cf097f4d97851cab99262cbc52acfcbeee5ff6"} Jan 31 08:03:48 crc kubenswrapper[4908]: I0131 08:03:48.935801 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" podStartSLOduration=2.520561111 
podStartE2EDuration="2.935781736s" podCreationTimestamp="2026-01-31 08:03:46 +0000 UTC" firstStartedPulling="2026-01-31 08:03:47.822077798 +0000 UTC m=+2534.438022452" lastFinishedPulling="2026-01-31 08:03:48.237298433 +0000 UTC m=+2534.853243077" observedRunningTime="2026-01-31 08:03:48.929398105 +0000 UTC m=+2535.545342759" watchObservedRunningTime="2026-01-31 08:03:48.935781736 +0000 UTC m=+2535.551726390" Jan 31 08:03:48 crc kubenswrapper[4908]: I0131 08:03:48.939961 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:03:48 crc kubenswrapper[4908]: E0131 08:03:48.940282 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:04:03 crc kubenswrapper[4908]: I0131 08:04:03.940062 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:04:03 crc kubenswrapper[4908]: E0131 08:04:03.941027 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:04:16 crc kubenswrapper[4908]: I0131 08:04:16.940350 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:04:16 crc kubenswrapper[4908]: E0131 08:04:16.942413 
4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:04:21 crc kubenswrapper[4908]: I0131 08:04:21.205190 4908 generic.go:334] "Generic (PLEG): container finished" podID="ff124f52-0985-4497-bc1a-4864a1973914" containerID="40fd9efbb96073a61935f280a4cf097f4d97851cab99262cbc52acfcbeee5ff6" exitCode=0 Jan 31 08:04:21 crc kubenswrapper[4908]: I0131 08:04:21.205310 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" event={"ID":"ff124f52-0985-4497-bc1a-4864a1973914","Type":"ContainerDied","Data":"40fd9efbb96073a61935f280a4cf097f4d97851cab99262cbc52acfcbeee5ff6"} Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.619369 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.671943 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory\") pod \"ff124f52-0985-4497-bc1a-4864a1973914\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.672062 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph\") pod \"ff124f52-0985-4497-bc1a-4864a1973914\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.672180 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam\") pod \"ff124f52-0985-4497-bc1a-4864a1973914\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.672298 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q8vv\" (UniqueName: \"kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv\") pod \"ff124f52-0985-4497-bc1a-4864a1973914\" (UID: \"ff124f52-0985-4497-bc1a-4864a1973914\") " Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.683497 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv" (OuterVolumeSpecName: "kube-api-access-4q8vv") pod "ff124f52-0985-4497-bc1a-4864a1973914" (UID: "ff124f52-0985-4497-bc1a-4864a1973914"). InnerVolumeSpecName "kube-api-access-4q8vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.686526 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph" (OuterVolumeSpecName: "ceph") pod "ff124f52-0985-4497-bc1a-4864a1973914" (UID: "ff124f52-0985-4497-bc1a-4864a1973914"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.703292 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory" (OuterVolumeSpecName: "inventory") pod "ff124f52-0985-4497-bc1a-4864a1973914" (UID: "ff124f52-0985-4497-bc1a-4864a1973914"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.703899 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff124f52-0985-4497-bc1a-4864a1973914" (UID: "ff124f52-0985-4497-bc1a-4864a1973914"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.774473 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q8vv\" (UniqueName: \"kubernetes.io/projected/ff124f52-0985-4497-bc1a-4864a1973914-kube-api-access-4q8vv\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.774521 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.774534 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:22 crc kubenswrapper[4908]: I0131 08:04:22.774548 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff124f52-0985-4497-bc1a-4864a1973914-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.221655 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" event={"ID":"ff124f52-0985-4497-bc1a-4864a1973914","Type":"ContainerDied","Data":"f1cc5993f81e98cd0c3634efdfeea60ff7c2e77de6f05b413e7deee523947993"} Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.221698 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jxn85" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.221703 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cc5993f81e98cd0c3634efdfeea60ff7c2e77de6f05b413e7deee523947993" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.319880 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n"] Jan 31 08:04:23 crc kubenswrapper[4908]: E0131 08:04:23.320295 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff124f52-0985-4497-bc1a-4864a1973914" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.320315 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff124f52-0985-4497-bc1a-4864a1973914" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.320489 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff124f52-0985-4497-bc1a-4864a1973914" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.321109 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.325462 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.325814 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.325869 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.326219 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.326401 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.334374 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n"] Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.391088 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.391431 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: 
\"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.391894 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.391953 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m67z\" (UniqueName: \"kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.494135 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.494189 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m67z\" (UniqueName: \"kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.494249 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.494718 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.499080 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.500248 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.500440 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: 
\"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.516122 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m67z\" (UniqueName: \"kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:23 crc kubenswrapper[4908]: I0131 08:04:23.643557 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:24 crc kubenswrapper[4908]: I0131 08:04:24.165745 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n"] Jan 31 08:04:24 crc kubenswrapper[4908]: I0131 08:04:24.229930 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" event={"ID":"e7290989-e8f1-419b-8196-7e2acaaac9db","Type":"ContainerStarted","Data":"7743def77c3c611c46955d235c9f098958c5866e6e6a63833e95552dd0d5ed87"} Jan 31 08:04:26 crc kubenswrapper[4908]: I0131 08:04:26.248531 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" event={"ID":"e7290989-e8f1-419b-8196-7e2acaaac9db","Type":"ContainerStarted","Data":"07a3347e1d359b2bf611b577dfcaecc409173191bd0d05c6de044290af04c75b"} Jan 31 08:04:26 crc kubenswrapper[4908]: I0131 08:04:26.274493 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" podStartSLOduration=2.141588222 podStartE2EDuration="3.274468084s" podCreationTimestamp="2026-01-31 08:04:23 +0000 UTC" 
firstStartedPulling="2026-01-31 08:04:24.170025439 +0000 UTC m=+2570.785970093" lastFinishedPulling="2026-01-31 08:04:25.302905291 +0000 UTC m=+2571.918849955" observedRunningTime="2026-01-31 08:04:26.267330214 +0000 UTC m=+2572.883274868" watchObservedRunningTime="2026-01-31 08:04:26.274468084 +0000 UTC m=+2572.890412738" Jan 31 08:04:29 crc kubenswrapper[4908]: I0131 08:04:29.282605 4908 generic.go:334] "Generic (PLEG): container finished" podID="e7290989-e8f1-419b-8196-7e2acaaac9db" containerID="07a3347e1d359b2bf611b577dfcaecc409173191bd0d05c6de044290af04c75b" exitCode=0 Jan 31 08:04:29 crc kubenswrapper[4908]: I0131 08:04:29.282704 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" event={"ID":"e7290989-e8f1-419b-8196-7e2acaaac9db","Type":"ContainerDied","Data":"07a3347e1d359b2bf611b577dfcaecc409173191bd0d05c6de044290af04c75b"} Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.686141 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.747436 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph\") pod \"e7290989-e8f1-419b-8196-7e2acaaac9db\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.747579 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory\") pod \"e7290989-e8f1-419b-8196-7e2acaaac9db\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.747672 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m67z\" (UniqueName: \"kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z\") pod \"e7290989-e8f1-419b-8196-7e2acaaac9db\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.748117 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam\") pod \"e7290989-e8f1-419b-8196-7e2acaaac9db\" (UID: \"e7290989-e8f1-419b-8196-7e2acaaac9db\") " Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.754416 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z" (OuterVolumeSpecName: "kube-api-access-5m67z") pod "e7290989-e8f1-419b-8196-7e2acaaac9db" (UID: "e7290989-e8f1-419b-8196-7e2acaaac9db"). InnerVolumeSpecName "kube-api-access-5m67z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.756140 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph" (OuterVolumeSpecName: "ceph") pod "e7290989-e8f1-419b-8196-7e2acaaac9db" (UID: "e7290989-e8f1-419b-8196-7e2acaaac9db"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.779321 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory" (OuterVolumeSpecName: "inventory") pod "e7290989-e8f1-419b-8196-7e2acaaac9db" (UID: "e7290989-e8f1-419b-8196-7e2acaaac9db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.785772 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e7290989-e8f1-419b-8196-7e2acaaac9db" (UID: "e7290989-e8f1-419b-8196-7e2acaaac9db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.851229 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.851274 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.851288 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7290989-e8f1-419b-8196-7e2acaaac9db-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.851301 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m67z\" (UniqueName: \"kubernetes.io/projected/e7290989-e8f1-419b-8196-7e2acaaac9db-kube-api-access-5m67z\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:30 crc kubenswrapper[4908]: I0131 08:04:30.940008 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:04:30 crc kubenswrapper[4908]: E0131 08:04:30.940383 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.305675 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" 
event={"ID":"e7290989-e8f1-419b-8196-7e2acaaac9db","Type":"ContainerDied","Data":"7743def77c3c611c46955d235c9f098958c5866e6e6a63833e95552dd0d5ed87"} Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.305726 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7743def77c3c611c46955d235c9f098958c5866e6e6a63833e95552dd0d5ed87" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.305749 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.390021 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6"] Jan 31 08:04:31 crc kubenswrapper[4908]: E0131 08:04:31.390670 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7290989-e8f1-419b-8196-7e2acaaac9db" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.390691 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7290989-e8f1-419b-8196-7e2acaaac9db" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.390905 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7290989-e8f1-419b-8196-7e2acaaac9db" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.391917 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.394591 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.394762 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.396174 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.396685 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.397091 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.400371 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6"] Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.476011 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.476066 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfckj\" (UniqueName: \"kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: 
\"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.476142 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.476167 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.577509 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.577560 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfckj\" (UniqueName: \"kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.577604 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.577635 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.582835 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.583048 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.582885 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: 
\"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.595023 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfckj\" (UniqueName: \"kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c82r6\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:31 crc kubenswrapper[4908]: I0131 08:04:31.708018 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:04:32 crc kubenswrapper[4908]: I0131 08:04:32.245118 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6"] Jan 31 08:04:32 crc kubenswrapper[4908]: I0131 08:04:32.316775 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" event={"ID":"f7bb5704-7aa7-4021-bd84-2065fdda3980","Type":"ContainerStarted","Data":"be48fd6c4031c15724f5d55eb984f9f4befc2d07352bcb25204f9ef659b20cff"} Jan 31 08:04:33 crc kubenswrapper[4908]: I0131 08:04:33.346564 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" event={"ID":"f7bb5704-7aa7-4021-bd84-2065fdda3980","Type":"ContainerStarted","Data":"18eb6ad1922e22ff0bab29c303029d11dd3d1f62435b7c4d226eec166a6d5594"} Jan 31 08:04:33 crc kubenswrapper[4908]: I0131 08:04:33.379887 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" podStartSLOduration=1.898929989 podStartE2EDuration="2.379854252s" podCreationTimestamp="2026-01-31 08:04:31 +0000 UTC" 
firstStartedPulling="2026-01-31 08:04:32.24816584 +0000 UTC m=+2578.864110504" lastFinishedPulling="2026-01-31 08:04:32.729090113 +0000 UTC m=+2579.345034767" observedRunningTime="2026-01-31 08:04:33.37143856 +0000 UTC m=+2579.987383214" watchObservedRunningTime="2026-01-31 08:04:33.379854252 +0000 UTC m=+2579.995798926" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.012720 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.017243 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.020940 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.096311 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.096443 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.096563 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjps\" (UniqueName: \"kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps\") pod 
\"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.199021 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.199103 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.199157 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjps\" (UniqueName: \"kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.199607 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.199686 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content\") pod \"community-operators-7dgqw\" (UID: 
\"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.221387 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjps\" (UniqueName: \"kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps\") pod \"community-operators-7dgqw\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.348890 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:36 crc kubenswrapper[4908]: I0131 08:04:36.916705 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:37 crc kubenswrapper[4908]: I0131 08:04:37.382499 4908 generic.go:334] "Generic (PLEG): container finished" podID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerID="33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680" exitCode=0 Jan 31 08:04:37 crc kubenswrapper[4908]: I0131 08:04:37.382546 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerDied","Data":"33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680"} Jan 31 08:04:37 crc kubenswrapper[4908]: I0131 08:04:37.382570 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerStarted","Data":"2083d25c67e52f3066e3208bc55871ac5ab1cee6603271354fa1a9587553f1e0"} Jan 31 08:04:38 crc kubenswrapper[4908]: I0131 08:04:38.394094 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" 
event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerStarted","Data":"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac"} Jan 31 08:04:39 crc kubenswrapper[4908]: I0131 08:04:39.401968 4908 generic.go:334] "Generic (PLEG): container finished" podID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerID="6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac" exitCode=0 Jan 31 08:04:39 crc kubenswrapper[4908]: I0131 08:04:39.402036 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerDied","Data":"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac"} Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.135859 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.140147 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.156202 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.282427 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.282483 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4trm6\" (UniqueName: \"kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.282539 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.384766 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.384923 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.384950 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4trm6\" (UniqueName: \"kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.385491 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.385491 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.407185 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4trm6\" (UniqueName: \"kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6\") pod \"redhat-operators-bk4j9\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.414991 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" 
event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerStarted","Data":"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87"} Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.434548 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dgqw" podStartSLOduration=3.028063057 podStartE2EDuration="5.434531261s" podCreationTimestamp="2026-01-31 08:04:35 +0000 UTC" firstStartedPulling="2026-01-31 08:04:37.384045428 +0000 UTC m=+2583.999990082" lastFinishedPulling="2026-01-31 08:04:39.790513632 +0000 UTC m=+2586.406458286" observedRunningTime="2026-01-31 08:04:40.433588547 +0000 UTC m=+2587.049533221" watchObservedRunningTime="2026-01-31 08:04:40.434531261 +0000 UTC m=+2587.050475915" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.477601 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:40 crc kubenswrapper[4908]: I0131 08:04:40.954527 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:04:41 crc kubenswrapper[4908]: I0131 08:04:41.425658 4908 generic.go:334] "Generic (PLEG): container finished" podID="78cd704d-cc63-4644-9906-2035917f37fe" containerID="bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb" exitCode=0 Jan 31 08:04:41 crc kubenswrapper[4908]: I0131 08:04:41.425691 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerDied","Data":"bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb"} Jan 31 08:04:41 crc kubenswrapper[4908]: I0131 08:04:41.426104 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" 
event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerStarted","Data":"cf234ff7333258bc07d2b3ff4c3c4d27b2644638310fbee2e9d735acade5e5f1"} Jan 31 08:04:43 crc kubenswrapper[4908]: I0131 08:04:43.444533 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerStarted","Data":"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62"} Jan 31 08:04:44 crc kubenswrapper[4908]: I0131 08:04:44.939935 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:04:44 crc kubenswrapper[4908]: E0131 08:04:44.940641 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:04:45 crc kubenswrapper[4908]: I0131 08:04:45.461700 4908 generic.go:334] "Generic (PLEG): container finished" podID="78cd704d-cc63-4644-9906-2035917f37fe" containerID="5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62" exitCode=0 Jan 31 08:04:45 crc kubenswrapper[4908]: I0131 08:04:45.461743 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerDied","Data":"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62"} Jan 31 08:04:46 crc kubenswrapper[4908]: I0131 08:04:46.350078 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:46 crc kubenswrapper[4908]: I0131 08:04:46.350178 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:46 crc kubenswrapper[4908]: I0131 08:04:46.421149 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:46 crc kubenswrapper[4908]: I0131 08:04:46.526641 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:47 crc kubenswrapper[4908]: I0131 08:04:47.535382 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:48 crc kubenswrapper[4908]: I0131 08:04:48.487228 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dgqw" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="registry-server" containerID="cri-o://3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87" gracePeriod=2 Jan 31 08:04:48 crc kubenswrapper[4908]: I0131 08:04:48.969332 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.053250 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities\") pod \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.053298 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjps\" (UniqueName: \"kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps\") pod \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.053443 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content\") pod \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\" (UID: \"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85\") " Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.055510 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities" (OuterVolumeSpecName: "utilities") pod "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" (UID: "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.058568 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps" (OuterVolumeSpecName: "kube-api-access-5bjps") pod "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" (UID: "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85"). InnerVolumeSpecName "kube-api-access-5bjps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.111855 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" (UID: "fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.156192 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.156230 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjps\" (UniqueName: \"kubernetes.io/projected/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-kube-api-access-5bjps\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.156242 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.504891 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerStarted","Data":"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead"} Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.516408 4908 generic.go:334] "Generic (PLEG): container finished" podID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerID="3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87" exitCode=0 Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.516470 4908 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-7dgqw" event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerDied","Data":"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87"} Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.516506 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgqw" event={"ID":"fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85","Type":"ContainerDied","Data":"2083d25c67e52f3066e3208bc55871ac5ab1cee6603271354fa1a9587553f1e0"} Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.516528 4908 scope.go:117] "RemoveContainer" containerID="3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.516694 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgqw" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.525911 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bk4j9" podStartSLOduration=2.487572348 podStartE2EDuration="9.525891362s" podCreationTimestamp="2026-01-31 08:04:40 +0000 UTC" firstStartedPulling="2026-01-31 08:04:41.427270998 +0000 UTC m=+2588.043215652" lastFinishedPulling="2026-01-31 08:04:48.465590012 +0000 UTC m=+2595.081534666" observedRunningTime="2026-01-31 08:04:49.524496697 +0000 UTC m=+2596.140441351" watchObservedRunningTime="2026-01-31 08:04:49.525891362 +0000 UTC m=+2596.141836016" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.551460 4908 scope.go:117] "RemoveContainer" containerID="6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.557664 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.567528 4908 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-7dgqw"] Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.583406 4908 scope.go:117] "RemoveContainer" containerID="33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.609709 4908 scope.go:117] "RemoveContainer" containerID="3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87" Jan 31 08:04:49 crc kubenswrapper[4908]: E0131 08:04:49.610195 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87\": container with ID starting with 3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87 not found: ID does not exist" containerID="3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.610243 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87"} err="failed to get container status \"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87\": rpc error: code = NotFound desc = could not find container \"3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87\": container with ID starting with 3f9b090deb9f25e593c5d22366f40a5de18072826fd9ce25058eb3c71a4f2f87 not found: ID does not exist" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.610274 4908 scope.go:117] "RemoveContainer" containerID="6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac" Jan 31 08:04:49 crc kubenswrapper[4908]: E0131 08:04:49.610523 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac\": container with ID starting with 
6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac not found: ID does not exist" containerID="6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.610552 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac"} err="failed to get container status \"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac\": rpc error: code = NotFound desc = could not find container \"6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac\": container with ID starting with 6be758fd44194a614434011e089bd29fb62c8cd4f156e7c514eab49c3ca343ac not found: ID does not exist" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.610573 4908 scope.go:117] "RemoveContainer" containerID="33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680" Jan 31 08:04:49 crc kubenswrapper[4908]: E0131 08:04:49.610837 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680\": container with ID starting with 33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680 not found: ID does not exist" containerID="33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.610860 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680"} err="failed to get container status \"33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680\": rpc error: code = NotFound desc = could not find container \"33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680\": container with ID starting with 33f5847f84932bfae176cfc40d0fa6e047c439ff54cbde82b2543ca3e2975680 not found: ID does not 
exist" Jan 31 08:04:49 crc kubenswrapper[4908]: I0131 08:04:49.951082 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" path="/var/lib/kubelet/pods/fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85/volumes" Jan 31 08:04:50 crc kubenswrapper[4908]: I0131 08:04:50.478267 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:50 crc kubenswrapper[4908]: I0131 08:04:50.478324 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:04:51 crc kubenswrapper[4908]: I0131 08:04:51.521572 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk4j9" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="registry-server" probeResult="failure" output=< Jan 31 08:04:51 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 08:04:51 crc kubenswrapper[4908]: > Jan 31 08:04:56 crc kubenswrapper[4908]: I0131 08:04:56.941372 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:04:56 crc kubenswrapper[4908]: E0131 08:04:56.942526 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:05:00 crc kubenswrapper[4908]: I0131 08:05:00.528642 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:05:00 crc kubenswrapper[4908]: I0131 08:05:00.580133 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:05:00 crc kubenswrapper[4908]: I0131 08:05:00.768285 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:05:01 crc kubenswrapper[4908]: I0131 08:05:01.617269 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bk4j9" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="registry-server" containerID="cri-o://5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead" gracePeriod=2 Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.068856 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.227882 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4trm6\" (UniqueName: \"kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6\") pod \"78cd704d-cc63-4644-9906-2035917f37fe\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.228013 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities\") pod \"78cd704d-cc63-4644-9906-2035917f37fe\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.228081 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content\") pod \"78cd704d-cc63-4644-9906-2035917f37fe\" (UID: \"78cd704d-cc63-4644-9906-2035917f37fe\") " Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.229414 4908 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities" (OuterVolumeSpecName: "utilities") pod "78cd704d-cc63-4644-9906-2035917f37fe" (UID: "78cd704d-cc63-4644-9906-2035917f37fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.234604 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6" (OuterVolumeSpecName: "kube-api-access-4trm6") pod "78cd704d-cc63-4644-9906-2035917f37fe" (UID: "78cd704d-cc63-4644-9906-2035917f37fe"). InnerVolumeSpecName "kube-api-access-4trm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.330240 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4trm6\" (UniqueName: \"kubernetes.io/projected/78cd704d-cc63-4644-9906-2035917f37fe-kube-api-access-4trm6\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.330548 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.353030 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78cd704d-cc63-4644-9906-2035917f37fe" (UID: "78cd704d-cc63-4644-9906-2035917f37fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.431789 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cd704d-cc63-4644-9906-2035917f37fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.629456 4908 generic.go:334] "Generic (PLEG): container finished" podID="78cd704d-cc63-4644-9906-2035917f37fe" containerID="5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead" exitCode=0 Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.629539 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4j9" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.629548 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerDied","Data":"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead"} Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.630162 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4j9" event={"ID":"78cd704d-cc63-4644-9906-2035917f37fe","Type":"ContainerDied","Data":"cf234ff7333258bc07d2b3ff4c3c4d27b2644638310fbee2e9d735acade5e5f1"} Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.630187 4908 scope.go:117] "RemoveContainer" containerID="5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.656337 4908 scope.go:117] "RemoveContainer" containerID="5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.668690 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 
08:05:02.676233 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bk4j9"] Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.684843 4908 scope.go:117] "RemoveContainer" containerID="bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.719968 4908 scope.go:117] "RemoveContainer" containerID="5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead" Jan 31 08:05:02 crc kubenswrapper[4908]: E0131 08:05:02.720437 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead\": container with ID starting with 5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead not found: ID does not exist" containerID="5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.720487 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead"} err="failed to get container status \"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead\": rpc error: code = NotFound desc = could not find container \"5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead\": container with ID starting with 5a56d34722bf8b9aa6aa2f6eb1747259b623d29b68a1ecf65bf65afa0a021ead not found: ID does not exist" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.720530 4908 scope.go:117] "RemoveContainer" containerID="5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62" Jan 31 08:05:02 crc kubenswrapper[4908]: E0131 08:05:02.721823 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62\": container with ID 
starting with 5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62 not found: ID does not exist" containerID="5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.721854 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62"} err="failed to get container status \"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62\": rpc error: code = NotFound desc = could not find container \"5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62\": container with ID starting with 5a4fb3b06ea97ab8fd28878973fa1274a7026b15c349492466dcb39199f2ff62 not found: ID does not exist" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.721874 4908 scope.go:117] "RemoveContainer" containerID="bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb" Jan 31 08:05:02 crc kubenswrapper[4908]: E0131 08:05:02.722246 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb\": container with ID starting with bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb not found: ID does not exist" containerID="bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb" Jan 31 08:05:02 crc kubenswrapper[4908]: I0131 08:05:02.722275 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb"} err="failed to get container status \"bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb\": rpc error: code = NotFound desc = could not find container \"bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb\": container with ID starting with bfb0f0cf5b45248356bcb466529e4244f795de783611a91e399bcf843e064bbb not found: 
ID does not exist" Jan 31 08:05:03 crc kubenswrapper[4908]: I0131 08:05:03.951326 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cd704d-cc63-4644-9906-2035917f37fe" path="/var/lib/kubelet/pods/78cd704d-cc63-4644-9906-2035917f37fe/volumes" Jan 31 08:05:09 crc kubenswrapper[4908]: I0131 08:05:09.940544 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:05:09 crc kubenswrapper[4908]: E0131 08:05:09.941655 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:05:10 crc kubenswrapper[4908]: I0131 08:05:10.712331 4908 generic.go:334] "Generic (PLEG): container finished" podID="f7bb5704-7aa7-4021-bd84-2065fdda3980" containerID="18eb6ad1922e22ff0bab29c303029d11dd3d1f62435b7c4d226eec166a6d5594" exitCode=0 Jan 31 08:05:10 crc kubenswrapper[4908]: I0131 08:05:10.712378 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" event={"ID":"f7bb5704-7aa7-4021-bd84-2065fdda3980","Type":"ContainerDied","Data":"18eb6ad1922e22ff0bab29c303029d11dd3d1f62435b7c4d226eec166a6d5594"} Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.134743 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.230771 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfckj\" (UniqueName: \"kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj\") pod \"f7bb5704-7aa7-4021-bd84-2065fdda3980\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.230852 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory\") pod \"f7bb5704-7aa7-4021-bd84-2065fdda3980\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.230912 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam\") pod \"f7bb5704-7aa7-4021-bd84-2065fdda3980\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.230940 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph\") pod \"f7bb5704-7aa7-4021-bd84-2065fdda3980\" (UID: \"f7bb5704-7aa7-4021-bd84-2065fdda3980\") " Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.237171 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj" (OuterVolumeSpecName: "kube-api-access-wfckj") pod "f7bb5704-7aa7-4021-bd84-2065fdda3980" (UID: "f7bb5704-7aa7-4021-bd84-2065fdda3980"). InnerVolumeSpecName "kube-api-access-wfckj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.237487 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph" (OuterVolumeSpecName: "ceph") pod "f7bb5704-7aa7-4021-bd84-2065fdda3980" (UID: "f7bb5704-7aa7-4021-bd84-2065fdda3980"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.258418 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory" (OuterVolumeSpecName: "inventory") pod "f7bb5704-7aa7-4021-bd84-2065fdda3980" (UID: "f7bb5704-7aa7-4021-bd84-2065fdda3980"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.260157 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7bb5704-7aa7-4021-bd84-2065fdda3980" (UID: "f7bb5704-7aa7-4021-bd84-2065fdda3980"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.334134 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfckj\" (UniqueName: \"kubernetes.io/projected/f7bb5704-7aa7-4021-bd84-2065fdda3980-kube-api-access-wfckj\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.334170 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.334181 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.334191 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f7bb5704-7aa7-4021-bd84-2065fdda3980-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.731340 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" event={"ID":"f7bb5704-7aa7-4021-bd84-2065fdda3980","Type":"ContainerDied","Data":"be48fd6c4031c15724f5d55eb984f9f4befc2d07352bcb25204f9ef659b20cff"} Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.731656 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be48fd6c4031c15724f5d55eb984f9f4befc2d07352bcb25204f9ef659b20cff" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.731400 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c82r6" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862034 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wp9w6"] Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862543 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bb5704-7aa7-4021-bd84-2065fdda3980" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862574 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bb5704-7aa7-4021-bd84-2065fdda3980" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862596 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862603 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862618 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="extract-content" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862625 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="extract-content" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862632 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="extract-utilities" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862641 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="extract-utilities" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862650 4908 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="extract-utilities" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862656 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="extract-utilities" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862672 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="extract-content" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862678 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="extract-content" Jan 31 08:05:12 crc kubenswrapper[4908]: E0131 08:05:12.862691 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862698 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862866 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cd704d-cc63-4644-9906-2035917f37fe" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862881 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bb5704-7aa7-4021-bd84-2065fdda3980" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.862896 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5d9247-17a1-4eb4-ad13-3e02ffcb8a85" containerName="registry-server" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.863555 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.866932 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.867073 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.867088 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.867134 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.867075 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:05:12 crc kubenswrapper[4908]: I0131 08:05:12.873337 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wp9w6"] Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.047875 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cl2\" (UniqueName: \"kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.048073 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.048793 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.048878 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.151173 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cl2\" (UniqueName: \"kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.151228 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.151358 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.151390 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.156263 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.156864 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.159552 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph\") pod \"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.170253 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cl2\" (UniqueName: \"kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2\") pod 
\"ssh-known-hosts-edpm-deployment-wp9w6\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.178921 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:13 crc kubenswrapper[4908]: I0131 08:05:13.737498 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wp9w6"] Jan 31 08:05:14 crc kubenswrapper[4908]: I0131 08:05:14.754260 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" event={"ID":"cf1127bf-036b-4e90-ae26-84a790d46e73","Type":"ContainerStarted","Data":"1654558adde7ba06ce2366eff474c8c382f586222ceefe776dbbe91221dd91e3"} Jan 31 08:05:14 crc kubenswrapper[4908]: I0131 08:05:14.756635 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" event={"ID":"cf1127bf-036b-4e90-ae26-84a790d46e73","Type":"ContainerStarted","Data":"fdbf9a715d921078daffbe62a705849338b1c9b73ff44da9068562779531eff2"} Jan 31 08:05:14 crc kubenswrapper[4908]: I0131 08:05:14.783521 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" podStartSLOduration=2.302845353 podStartE2EDuration="2.783498558s" podCreationTimestamp="2026-01-31 08:05:12 +0000 UTC" firstStartedPulling="2026-01-31 08:05:13.740879284 +0000 UTC m=+2620.356823928" lastFinishedPulling="2026-01-31 08:05:14.221532479 +0000 UTC m=+2620.837477133" observedRunningTime="2026-01-31 08:05:14.781578449 +0000 UTC m=+2621.397523103" watchObservedRunningTime="2026-01-31 08:05:14.783498558 +0000 UTC m=+2621.399443212" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.379270 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"] Jan 31 08:05:19 crc 
kubenswrapper[4908]: I0131 08:05:19.382010 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.394237 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"] Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.574688 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znppx\" (UniqueName: \"kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.574752 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.574879 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.676681 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 
crc kubenswrapper[4908]: I0131 08:05:19.676756 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znppx\" (UniqueName: \"kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.676783 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.677261 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.677436 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.706853 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znppx\" (UniqueName: \"kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx\") pod \"redhat-marketplace-hhv59\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") " pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:19 crc kubenswrapper[4908]: I0131 08:05:19.711100 
4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhv59" Jan 31 08:05:20 crc kubenswrapper[4908]: I0131 08:05:20.284753 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"] Jan 31 08:05:20 crc kubenswrapper[4908]: I0131 08:05:20.798747 4908 generic.go:334] "Generic (PLEG): container finished" podID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerID="ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d" exitCode=0 Jan 31 08:05:20 crc kubenswrapper[4908]: I0131 08:05:20.798791 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerDied","Data":"ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d"} Jan 31 08:05:20 crc kubenswrapper[4908]: I0131 08:05:20.799079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerStarted","Data":"d44b776f09248663a2d2d31a4575490de886694c0ee58745c83a929756e5e6cb"} Jan 31 08:05:20 crc kubenswrapper[4908]: I0131 08:05:20.941107 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7" Jan 31 08:05:20 crc kubenswrapper[4908]: E0131 08:05:20.941368 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:05:22 crc kubenswrapper[4908]: I0131 08:05:22.837568 4908 generic.go:334] "Generic (PLEG): container finished" 
podID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerID="94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51" exitCode=0 Jan 31 08:05:22 crc kubenswrapper[4908]: I0131 08:05:22.838185 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerDied","Data":"94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51"} Jan 31 08:05:23 crc kubenswrapper[4908]: I0131 08:05:23.850781 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerStarted","Data":"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"} Jan 31 08:05:23 crc kubenswrapper[4908]: I0131 08:05:23.853048 4908 generic.go:334] "Generic (PLEG): container finished" podID="cf1127bf-036b-4e90-ae26-84a790d46e73" containerID="1654558adde7ba06ce2366eff474c8c382f586222ceefe776dbbe91221dd91e3" exitCode=0 Jan 31 08:05:23 crc kubenswrapper[4908]: I0131 08:05:23.853085 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" event={"ID":"cf1127bf-036b-4e90-ae26-84a790d46e73","Type":"ContainerDied","Data":"1654558adde7ba06ce2366eff474c8c382f586222ceefe776dbbe91221dd91e3"} Jan 31 08:05:23 crc kubenswrapper[4908]: I0131 08:05:23.879118 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hhv59" podStartSLOduration=2.405473001 podStartE2EDuration="4.879096908s" podCreationTimestamp="2026-01-31 08:05:19 +0000 UTC" firstStartedPulling="2026-01-31 08:05:20.802513208 +0000 UTC m=+2627.418457862" lastFinishedPulling="2026-01-31 08:05:23.276137115 +0000 UTC m=+2629.892081769" observedRunningTime="2026-01-31 08:05:23.870729097 +0000 UTC m=+2630.486673751" watchObservedRunningTime="2026-01-31 08:05:23.879096908 +0000 UTC m=+2630.495041562" Jan 31 
08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.235898 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.391090 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cl2\" (UniqueName: \"kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2\") pod \"cf1127bf-036b-4e90-ae26-84a790d46e73\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.391208 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph\") pod \"cf1127bf-036b-4e90-ae26-84a790d46e73\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.391487 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0\") pod \"cf1127bf-036b-4e90-ae26-84a790d46e73\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.391585 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam\") pod \"cf1127bf-036b-4e90-ae26-84a790d46e73\" (UID: \"cf1127bf-036b-4e90-ae26-84a790d46e73\") " Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.397005 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2" (OuterVolumeSpecName: "kube-api-access-m4cl2") pod "cf1127bf-036b-4e90-ae26-84a790d46e73" (UID: "cf1127bf-036b-4e90-ae26-84a790d46e73"). 
InnerVolumeSpecName "kube-api-access-m4cl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.397998 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph" (OuterVolumeSpecName: "ceph") pod "cf1127bf-036b-4e90-ae26-84a790d46e73" (UID: "cf1127bf-036b-4e90-ae26-84a790d46e73"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.417268 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "cf1127bf-036b-4e90-ae26-84a790d46e73" (UID: "cf1127bf-036b-4e90-ae26-84a790d46e73"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.420163 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf1127bf-036b-4e90-ae26-84a790d46e73" (UID: "cf1127bf-036b-4e90-ae26-84a790d46e73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.494256 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.494302 4908 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.494316 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf1127bf-036b-4e90-ae26-84a790d46e73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.494329 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cl2\" (UniqueName: \"kubernetes.io/projected/cf1127bf-036b-4e90-ae26-84a790d46e73-kube-api-access-m4cl2\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.868397 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" event={"ID":"cf1127bf-036b-4e90-ae26-84a790d46e73","Type":"ContainerDied","Data":"fdbf9a715d921078daffbe62a705849338b1c9b73ff44da9068562779531eff2"} Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.868450 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbf9a715d921078daffbe62a705849338b1c9b73ff44da9068562779531eff2" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.868469 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wp9w6" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.966755 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"] Jan 31 08:05:25 crc kubenswrapper[4908]: E0131 08:05:25.967231 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1127bf-036b-4e90-ae26-84a790d46e73" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.967253 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1127bf-036b-4e90-ae26-84a790d46e73" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.967448 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1127bf-036b-4e90-ae26-84a790d46e73" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.968133 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.970202 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.970230 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.973005 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.973008 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.973007 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 08:05:25 crc kubenswrapper[4908]: I0131 08:05:25.981757 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"]
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.104618 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbqb\" (UniqueName: \"kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.104723 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.105085 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.105136 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.210261 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.210411 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.210434 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.210526 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbqb\" (UniqueName: \"kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.214913 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.215538 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.215593 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.228674 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbqb\" (UniqueName: \"kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-96lv7\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.286024 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.804683 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"]
Jan 31 08:05:26 crc kubenswrapper[4908]: W0131 08:05:26.815163 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab86783_08b3_4eed_bb8c_f053a7d46d0c.slice/crio-0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0 WatchSource:0}: Error finding container 0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0: Status 404 returned error can't find the container with id 0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0
Jan 31 08:05:26 crc kubenswrapper[4908]: I0131 08:05:26.877026 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7" event={"ID":"5ab86783-08b3-4eed-bb8c-f053a7d46d0c","Type":"ContainerStarted","Data":"0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0"}
Jan 31 08:05:28 crc kubenswrapper[4908]: I0131 08:05:28.895638 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7" event={"ID":"5ab86783-08b3-4eed-bb8c-f053a7d46d0c","Type":"ContainerStarted","Data":"962b104d100ae1add451096383a81ac81cbb38be83dbef14bd02afaf596b4cf7"}
Jan 31 08:05:28 crc kubenswrapper[4908]: I0131 08:05:28.916480 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7" podStartSLOduration=2.94817761 podStartE2EDuration="3.916457049s" podCreationTimestamp="2026-01-31 08:05:25 +0000 UTC" firstStartedPulling="2026-01-31 08:05:26.822048698 +0000 UTC m=+2633.437993352" lastFinishedPulling="2026-01-31 08:05:27.790328137 +0000 UTC m=+2634.406272791" observedRunningTime="2026-01-31 08:05:28.910181 +0000 UTC m=+2635.526125654" watchObservedRunningTime="2026-01-31 08:05:28.916457049 +0000 UTC m=+2635.532401703"
Jan 31 08:05:29 crc kubenswrapper[4908]: I0131 08:05:29.712196 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:29 crc kubenswrapper[4908]: I0131 08:05:29.712268 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:29 crc kubenswrapper[4908]: I0131 08:05:29.766433 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:29 crc kubenswrapper[4908]: I0131 08:05:29.954602 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:30 crc kubenswrapper[4908]: I0131 08:05:30.005284 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"]
Jan 31 08:05:31 crc kubenswrapper[4908]: I0131 08:05:31.918835 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hhv59" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="registry-server" containerID="cri-o://6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f" gracePeriod=2
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.362496 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.439745 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znppx\" (UniqueName: \"kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx\") pod \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") "
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.439878 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content\") pod \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") "
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.440000 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities\") pod \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\" (UID: \"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096\") "
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.440784 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities" (OuterVolumeSpecName: "utilities") pod "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" (UID: "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.449305 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx" (OuterVolumeSpecName: "kube-api-access-znppx") pod "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" (UID: "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096"). InnerVolumeSpecName "kube-api-access-znppx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.469341 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" (UID: "2aadeb55-1bc3-4fe4-8e95-d2973a9c4096"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.541859 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.542085 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.542179 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znppx\" (UniqueName: \"kubernetes.io/projected/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096-kube-api-access-znppx\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.933799 4908 generic.go:334] "Generic (PLEG): container finished" podID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerID="6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f" exitCode=0
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.934039 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhv59"
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.934974 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerDied","Data":"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"}
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.935120 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhv59" event={"ID":"2aadeb55-1bc3-4fe4-8e95-d2973a9c4096","Type":"ContainerDied","Data":"d44b776f09248663a2d2d31a4575490de886694c0ee58745c83a929756e5e6cb"}
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.935145 4908 scope.go:117] "RemoveContainer" containerID="6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.956712 4908 scope.go:117] "RemoveContainer" containerID="94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51"
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.977032 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"]
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.988764 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhv59"]
Jan 31 08:05:32 crc kubenswrapper[4908]: I0131 08:05:32.992188 4908 scope.go:117] "RemoveContainer" containerID="ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.023656 4908 scope.go:117] "RemoveContainer" containerID="6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"
Jan 31 08:05:33 crc kubenswrapper[4908]: E0131 08:05:33.024272 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f\": container with ID starting with 6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f not found: ID does not exist" containerID="6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.024426 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f"} err="failed to get container status \"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f\": rpc error: code = NotFound desc = could not find container \"6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f\": container with ID starting with 6f1985345caa5b93c3d4638535b66e24c8625fb34cb83c57f58e6f4067e3c06f not found: ID does not exist"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.024517 4908 scope.go:117] "RemoveContainer" containerID="94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51"
Jan 31 08:05:33 crc kubenswrapper[4908]: E0131 08:05:33.024952 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51\": container with ID starting with 94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51 not found: ID does not exist" containerID="94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.025070 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51"} err="failed to get container status \"94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51\": rpc error: code = NotFound desc = could not find container \"94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51\": container with ID starting with 94900de9549917a2b826316523c50c94e259926ad6cf2011caff608cc8173a51 not found: ID does not exist"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.025143 4908 scope.go:117] "RemoveContainer" containerID="ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d"
Jan 31 08:05:33 crc kubenswrapper[4908]: E0131 08:05:33.025561 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d\": container with ID starting with ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d not found: ID does not exist" containerID="ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.025618 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d"} err="failed to get container status \"ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d\": rpc error: code = NotFound desc = could not find container \"ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d\": container with ID starting with ca67d97068f94d13f9c656505f1a8bd0af4651bb25d33e7f246bec30b560363d not found: ID does not exist"
Jan 31 08:05:33 crc kubenswrapper[4908]: I0131 08:05:33.953580 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" path="/var/lib/kubelet/pods/2aadeb55-1bc3-4fe4-8e95-d2973a9c4096/volumes"
Jan 31 08:05:34 crc kubenswrapper[4908]: I0131 08:05:34.941365 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:05:34 crc kubenswrapper[4908]: E0131 08:05:34.941811 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:05:35 crc kubenswrapper[4908]: I0131 08:05:35.975024 4908 generic.go:334] "Generic (PLEG): container finished" podID="5ab86783-08b3-4eed-bb8c-f053a7d46d0c" containerID="962b104d100ae1add451096383a81ac81cbb38be83dbef14bd02afaf596b4cf7" exitCode=0
Jan 31 08:05:35 crc kubenswrapper[4908]: I0131 08:05:35.975102 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7" event={"ID":"5ab86783-08b3-4eed-bb8c-f053a7d46d0c","Type":"ContainerDied","Data":"962b104d100ae1add451096383a81ac81cbb38be83dbef14bd02afaf596b4cf7"}
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.386380 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.533238 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory\") pod \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") "
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.533544 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph\") pod \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") "
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.533602 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrbqb\" (UniqueName: \"kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb\") pod \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") "
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.533722 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam\") pod \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\" (UID: \"5ab86783-08b3-4eed-bb8c-f053a7d46d0c\") "
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.539285 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph" (OuterVolumeSpecName: "ceph") pod "5ab86783-08b3-4eed-bb8c-f053a7d46d0c" (UID: "5ab86783-08b3-4eed-bb8c-f053a7d46d0c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.539285 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb" (OuterVolumeSpecName: "kube-api-access-lrbqb") pod "5ab86783-08b3-4eed-bb8c-f053a7d46d0c" (UID: "5ab86783-08b3-4eed-bb8c-f053a7d46d0c"). InnerVolumeSpecName "kube-api-access-lrbqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.562070 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ab86783-08b3-4eed-bb8c-f053a7d46d0c" (UID: "5ab86783-08b3-4eed-bb8c-f053a7d46d0c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.563238 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory" (OuterVolumeSpecName: "inventory") pod "5ab86783-08b3-4eed-bb8c-f053a7d46d0c" (UID: "5ab86783-08b3-4eed-bb8c-f053a7d46d0c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.636779 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.636819 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.636831 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrbqb\" (UniqueName: \"kubernetes.io/projected/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-kube-api-access-lrbqb\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.636845 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ab86783-08b3-4eed-bb8c-f053a7d46d0c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.993963 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7" event={"ID":"5ab86783-08b3-4eed-bb8c-f053a7d46d0c","Type":"ContainerDied","Data":"0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0"}
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.994066 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-96lv7"
Jan 31 08:05:37 crc kubenswrapper[4908]: I0131 08:05:37.994077 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d620d51525a36a4939659dc52d1d04b944cb64931a129d28655ef83718dedf0"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.054687 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"]
Jan 31 08:05:38 crc kubenswrapper[4908]: E0131 08:05:38.055204 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="registry-server"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055225 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="registry-server"
Jan 31 08:05:38 crc kubenswrapper[4908]: E0131 08:05:38.055239 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="extract-content"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055247 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="extract-content"
Jan 31 08:05:38 crc kubenswrapper[4908]: E0131 08:05:38.055259 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab86783-08b3-4eed-bb8c-f053a7d46d0c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055269 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab86783-08b3-4eed-bb8c-f053a7d46d0c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:05:38 crc kubenswrapper[4908]: E0131 08:05:38.055290 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="extract-utilities"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055299 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="extract-utilities"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055522 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab86783-08b3-4eed-bb8c-f053a7d46d0c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.055539 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aadeb55-1bc3-4fe4-8e95-d2973a9c4096" containerName="registry-server"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.056283 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.058216 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.058916 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.059168 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.059360 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.059521 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.077078 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"]
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.150573 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.150748 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.150784 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nnvc\" (UniqueName: \"kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.150810 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.253167 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.253229 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nnvc\" (UniqueName: \"kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.253257 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.253303 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.257123 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.257250 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.257392 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.269767 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nnvc\" (UniqueName: \"kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.380686 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:38 crc kubenswrapper[4908]: I0131 08:05:38.928325 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"]
Jan 31 08:05:39 crc kubenswrapper[4908]: I0131 08:05:39.003787 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" event={"ID":"a22134c5-5619-491a-9a1e-b7c07167ee98","Type":"ContainerStarted","Data":"f6d37734a69f0e707bd648b211278b9bcc074867cb41ae9c6aece0402ad3b0d7"}
Jan 31 08:05:40 crc kubenswrapper[4908]: I0131 08:05:40.012770 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" event={"ID":"a22134c5-5619-491a-9a1e-b7c07167ee98","Type":"ContainerStarted","Data":"fc96f660bef8b38c0c9dbcaaf5918c0a518003e92696bd2b97879bfa3c055033"}
Jan 31 08:05:40 crc kubenswrapper[4908]: I0131 08:05:40.027767 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" podStartSLOduration=1.615920424 podStartE2EDuration="2.027750094s" podCreationTimestamp="2026-01-31 08:05:38 +0000 UTC" firstStartedPulling="2026-01-31 08:05:38.940096103 +0000 UTC m=+2645.556040757" lastFinishedPulling="2026-01-31 08:05:39.351925773 +0000 UTC m=+2645.967870427" observedRunningTime="2026-01-31 08:05:40.027259182 +0000 UTC m=+2646.643203836" watchObservedRunningTime="2026-01-31 08:05:40.027750094 +0000 UTC m=+2646.643694738"
Jan 31 08:05:45 crc kubenswrapper[4908]: I0131 08:05:45.939666 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:05:47 crc kubenswrapper[4908]: I0131 08:05:47.072512 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05"}
Jan 31 08:05:49 crc kubenswrapper[4908]: I0131 08:05:49.091923 4908 generic.go:334] "Generic (PLEG): container finished" podID="a22134c5-5619-491a-9a1e-b7c07167ee98" containerID="fc96f660bef8b38c0c9dbcaaf5918c0a518003e92696bd2b97879bfa3c055033" exitCode=0
Jan 31 08:05:49 crc kubenswrapper[4908]: I0131 08:05:49.092485 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" event={"ID":"a22134c5-5619-491a-9a1e-b7c07167ee98","Type":"ContainerDied","Data":"fc96f660bef8b38c0c9dbcaaf5918c0a518003e92696bd2b97879bfa3c055033"}
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.521293 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x"
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.683378 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam\") pod \"a22134c5-5619-491a-9a1e-b7c07167ee98\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") "
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.683502 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph\") pod \"a22134c5-5619-491a-9a1e-b7c07167ee98\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") "
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.683522 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nnvc\" (UniqueName: \"kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc\") pod \"a22134c5-5619-491a-9a1e-b7c07167ee98\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") "
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.683597 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory\") pod \"a22134c5-5619-491a-9a1e-b7c07167ee98\" (UID: \"a22134c5-5619-491a-9a1e-b7c07167ee98\") "
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.692188 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc" (OuterVolumeSpecName: "kube-api-access-4nnvc") pod "a22134c5-5619-491a-9a1e-b7c07167ee98" (UID: "a22134c5-5619-491a-9a1e-b7c07167ee98"). InnerVolumeSpecName "kube-api-access-4nnvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.700324 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph" (OuterVolumeSpecName: "ceph") pod "a22134c5-5619-491a-9a1e-b7c07167ee98" (UID: "a22134c5-5619-491a-9a1e-b7c07167ee98"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.718233 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a22134c5-5619-491a-9a1e-b7c07167ee98" (UID: "a22134c5-5619-491a-9a1e-b7c07167ee98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.741352 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory" (OuterVolumeSpecName: "inventory") pod "a22134c5-5619-491a-9a1e-b7c07167ee98" (UID: "a22134c5-5619-491a-9a1e-b7c07167ee98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.785599 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.785642 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.785656 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nnvc\" (UniqueName: \"kubernetes.io/projected/a22134c5-5619-491a-9a1e-b7c07167ee98-kube-api-access-4nnvc\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:50 crc kubenswrapper[4908]: I0131 08:05:50.785668 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22134c5-5619-491a-9a1e-b7c07167ee98-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.109378 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" event={"ID":"a22134c5-5619-491a-9a1e-b7c07167ee98","Type":"ContainerDied","Data":"f6d37734a69f0e707bd648b211278b9bcc074867cb41ae9c6aece0402ad3b0d7"} Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.109416 4908 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.109428 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d37734a69f0e707bd648b211278b9bcc074867cb41ae9c6aece0402ad3b0d7" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.203346 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g"] Jan 31 08:05:51 crc kubenswrapper[4908]: E0131 08:05:51.203898 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22134c5-5619-491a-9a1e-b7c07167ee98" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.203910 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22134c5-5619-491a-9a1e-b7c07167ee98" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.204105 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22134c5-5619-491a-9a1e-b7c07167ee98" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.204670 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.208446 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.208581 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.208939 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.209188 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.209339 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.209478 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.209627 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.216198 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g"] Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.227674 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296155 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296224 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296258 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296289 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296307 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296334 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296356 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8nl\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296381 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296401 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296421 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296463 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296480 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.296508 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398230 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398340 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398378 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398409 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: 
\"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398444 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398465 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398495 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398524 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8nl\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398560 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398586 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398616 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398675 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.398702 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.405472 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.405767 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.406036 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.406214 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.407454 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.407799 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.408174 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.408425 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: 
\"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.408884 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.410388 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.410514 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.410827 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.419054 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8nl\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:51 crc kubenswrapper[4908]: I0131 08:05:51.554037 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:05:52 crc kubenswrapper[4908]: I0131 08:05:52.194847 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g"] Jan 31 08:05:52 crc kubenswrapper[4908]: W0131 08:05:52.196748 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9105f7e_b7e8_451c_be73_c07970181984.slice/crio-53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1 WatchSource:0}: Error finding container 53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1: Status 404 returned error can't find the container with id 53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1 Jan 31 08:05:53 crc kubenswrapper[4908]: I0131 08:05:53.131779 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" event={"ID":"c9105f7e-b7e8-451c-be73-c07970181984","Type":"ContainerStarted","Data":"c106efd7972490cae8f740ac4738026eea87f7715022dee22d2467f978e0f8b1"} Jan 31 08:05:53 crc kubenswrapper[4908]: I0131 08:05:53.133434 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" 
event={"ID":"c9105f7e-b7e8-451c-be73-c07970181984","Type":"ContainerStarted","Data":"53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1"} Jan 31 08:05:53 crc kubenswrapper[4908]: I0131 08:05:53.158660 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" podStartSLOduration=1.721144277 podStartE2EDuration="2.158641906s" podCreationTimestamp="2026-01-31 08:05:51 +0000 UTC" firstStartedPulling="2026-01-31 08:05:52.199190578 +0000 UTC m=+2658.815135232" lastFinishedPulling="2026-01-31 08:05:52.636688207 +0000 UTC m=+2659.252632861" observedRunningTime="2026-01-31 08:05:53.153434604 +0000 UTC m=+2659.769379268" watchObservedRunningTime="2026-01-31 08:05:53.158641906 +0000 UTC m=+2659.774586560" Jan 31 08:06:22 crc kubenswrapper[4908]: I0131 08:06:22.392969 4908 generic.go:334] "Generic (PLEG): container finished" podID="c9105f7e-b7e8-451c-be73-c07970181984" containerID="c106efd7972490cae8f740ac4738026eea87f7715022dee22d2467f978e0f8b1" exitCode=0 Jan 31 08:06:22 crc kubenswrapper[4908]: I0131 08:06:22.393044 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" event={"ID":"c9105f7e-b7e8-451c-be73-c07970181984","Type":"ContainerDied","Data":"c106efd7972490cae8f740ac4738026eea87f7715022dee22d2467f978e0f8b1"} Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.820197 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.938738 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.938828 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.938943 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.938993 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939061 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939100 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939191 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939220 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939564 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh8nl\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939678 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939702 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939794 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.939875 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle\") pod \"c9105f7e-b7e8-451c-be73-c07970181984\" (UID: \"c9105f7e-b7e8-451c-be73-c07970181984\") " Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.948657 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949362 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949390 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949627 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949647 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph" (OuterVolumeSpecName: "ceph") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949755 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.949933 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.950816 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.952788 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl" (OuterVolumeSpecName: "kube-api-access-gh8nl") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "kube-api-access-gh8nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.952935 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.954132 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.982968 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:23 crc kubenswrapper[4908]: I0131 08:06:23.990354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory" (OuterVolumeSpecName: "inventory") pod "c9105f7e-b7e8-451c-be73-c07970181984" (UID: "c9105f7e-b7e8-451c-be73-c07970181984"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043734 4908 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043792 4908 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043809 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043824 4908 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043839 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043851 4908 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043863 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043877 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043890 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043903 4908 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043917 4908 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9105f7e-b7e8-451c-be73-c07970181984-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043929 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.043942 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh8nl\" (UniqueName: \"kubernetes.io/projected/c9105f7e-b7e8-451c-be73-c07970181984-kube-api-access-gh8nl\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.412435 4908 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" event={"ID":"c9105f7e-b7e8-451c-be73-c07970181984","Type":"ContainerDied","Data":"53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1"} Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.412484 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53be17e6caf4edf61ef3ae9fb4f6b6b9db842d8fabf6dbe710c445fffdd336f1" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.412625 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.541855 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz"] Jan 31 08:06:24 crc kubenswrapper[4908]: E0131 08:06:24.542927 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9105f7e-b7e8-451c-be73-c07970181984" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.542953 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9105f7e-b7e8-451c-be73-c07970181984" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.543413 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9105f7e-b7e8-451c-be73-c07970181984" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.544389 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.552056 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz"] Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.556147 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.556586 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.556815 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.566515 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.567227 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.656676 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.656822 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: 
\"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.656928 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.657055 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbn5c\" (UniqueName: \"kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.759590 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.759724 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.759779 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bbn5c\" (UniqueName: \"kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.759852 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.765098 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.766181 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.783118 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbn5c\" (UniqueName: \"kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.786117 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:24 crc kubenswrapper[4908]: I0131 08:06:24.916898 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:25 crc kubenswrapper[4908]: I0131 08:06:25.461338 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz"] Jan 31 08:06:26 crc kubenswrapper[4908]: I0131 08:06:26.429176 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" event={"ID":"ef67c3c5-b34c-4302-9f15-55df61dc6e41","Type":"ContainerStarted","Data":"07cc32cf8d865f9f3367e33942d500751e5c849bb2296711d7cf2fdc64fb73e0"} Jan 31 08:06:26 crc kubenswrapper[4908]: I0131 08:06:26.429508 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" event={"ID":"ef67c3c5-b34c-4302-9f15-55df61dc6e41","Type":"ContainerStarted","Data":"5f5031a39bfe76239c7f5940ffeaf6b0967dac1f23625e974befbb7212d5b9fc"} Jan 31 08:06:26 crc kubenswrapper[4908]: I0131 08:06:26.460941 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" podStartSLOduration=1.9056237839999999 podStartE2EDuration="2.460913194s" podCreationTimestamp="2026-01-31 08:06:24 +0000 UTC" firstStartedPulling="2026-01-31 08:06:25.47582604 +0000 UTC 
m=+2692.091770694" lastFinishedPulling="2026-01-31 08:06:26.03111545 +0000 UTC m=+2692.647060104" observedRunningTime="2026-01-31 08:06:26.451390353 +0000 UTC m=+2693.067335007" watchObservedRunningTime="2026-01-31 08:06:26.460913194 +0000 UTC m=+2693.076857838" Jan 31 08:06:31 crc kubenswrapper[4908]: I0131 08:06:31.471660 4908 generic.go:334] "Generic (PLEG): container finished" podID="ef67c3c5-b34c-4302-9f15-55df61dc6e41" containerID="07cc32cf8d865f9f3367e33942d500751e5c849bb2296711d7cf2fdc64fb73e0" exitCode=0 Jan 31 08:06:31 crc kubenswrapper[4908]: I0131 08:06:31.471721 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" event={"ID":"ef67c3c5-b34c-4302-9f15-55df61dc6e41","Type":"ContainerDied","Data":"07cc32cf8d865f9f3367e33942d500751e5c849bb2296711d7cf2fdc64fb73e0"} Jan 31 08:06:32 crc kubenswrapper[4908]: I0131 08:06:32.867583 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.022297 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam\") pod \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.022470 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph\") pod \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.022553 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbn5c\" (UniqueName: 
\"kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c\") pod \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.022590 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory\") pod \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\" (UID: \"ef67c3c5-b34c-4302-9f15-55df61dc6e41\") " Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.028726 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c" (OuterVolumeSpecName: "kube-api-access-bbn5c") pod "ef67c3c5-b34c-4302-9f15-55df61dc6e41" (UID: "ef67c3c5-b34c-4302-9f15-55df61dc6e41"). InnerVolumeSpecName "kube-api-access-bbn5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.029408 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph" (OuterVolumeSpecName: "ceph") pod "ef67c3c5-b34c-4302-9f15-55df61dc6e41" (UID: "ef67c3c5-b34c-4302-9f15-55df61dc6e41"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.049228 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ef67c3c5-b34c-4302-9f15-55df61dc6e41" (UID: "ef67c3c5-b34c-4302-9f15-55df61dc6e41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.052267 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory" (OuterVolumeSpecName: "inventory") pod "ef67c3c5-b34c-4302-9f15-55df61dc6e41" (UID: "ef67c3c5-b34c-4302-9f15-55df61dc6e41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.124606 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.124643 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbn5c\" (UniqueName: \"kubernetes.io/projected/ef67c3c5-b34c-4302-9f15-55df61dc6e41-kube-api-access-bbn5c\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.124654 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.124664 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef67c3c5-b34c-4302-9f15-55df61dc6e41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.492833 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" event={"ID":"ef67c3c5-b34c-4302-9f15-55df61dc6e41","Type":"ContainerDied","Data":"5f5031a39bfe76239c7f5940ffeaf6b0967dac1f23625e974befbb7212d5b9fc"} Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.492891 4908 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="5f5031a39bfe76239c7f5940ffeaf6b0967dac1f23625e974befbb7212d5b9fc" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.492911 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.643510 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p"] Jan 31 08:06:33 crc kubenswrapper[4908]: E0131 08:06:33.643997 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef67c3c5-b34c-4302-9f15-55df61dc6e41" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.644016 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef67c3c5-b34c-4302-9f15-55df61dc6e41" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.644236 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef67c3c5-b34c-4302-9f15-55df61dc6e41" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.645030 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647242 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647338 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647420 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647485 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647724 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.647905 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.659759 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p"] Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738415 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738531 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb8k\" (UniqueName: 
\"kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738597 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738787 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738874 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.738963 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841240 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841298 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841331 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb8k\" (UniqueName: \"kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841377 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841475 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.841515 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.843506 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.846032 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.846641 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: 
I0131 08:06:33.847177 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.850624 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.870787 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb8k\" (UniqueName: \"kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gx7p\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:33 crc kubenswrapper[4908]: I0131 08:06:33.961469 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:06:34 crc kubenswrapper[4908]: I0131 08:06:34.466861 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p"] Jan 31 08:06:34 crc kubenswrapper[4908]: I0131 08:06:34.507890 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" event={"ID":"55dbf02b-7749-423f-9f04-7b2d545b9eaa","Type":"ContainerStarted","Data":"e4993a073f803adec56cf99ba0b7dfa8f4dfef1c0e7204bed23d3a32d1ae7940"} Jan 31 08:06:35 crc kubenswrapper[4908]: I0131 08:06:35.516651 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" event={"ID":"55dbf02b-7749-423f-9f04-7b2d545b9eaa","Type":"ContainerStarted","Data":"4df0825cb614b514029ac083d233b9f2bee2a2c153d4f2ca995ab45b99190d0e"} Jan 31 08:06:35 crc kubenswrapper[4908]: I0131 08:06:35.540383 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" podStartSLOduration=2.155548799 podStartE2EDuration="2.540361758s" podCreationTimestamp="2026-01-31 08:06:33 +0000 UTC" firstStartedPulling="2026-01-31 08:06:34.476125097 +0000 UTC m=+2701.092069751" lastFinishedPulling="2026-01-31 08:06:34.860938056 +0000 UTC m=+2701.476882710" observedRunningTime="2026-01-31 08:06:35.538125521 +0000 UTC m=+2702.154070175" watchObservedRunningTime="2026-01-31 08:06:35.540361758 +0000 UTC m=+2702.156306412" Jan 31 08:07:42 crc kubenswrapper[4908]: I0131 08:07:42.073401 4908 generic.go:334] "Generic (PLEG): container finished" podID="55dbf02b-7749-423f-9f04-7b2d545b9eaa" containerID="4df0825cb614b514029ac083d233b9f2bee2a2c153d4f2ca995ab45b99190d0e" exitCode=0 Jan 31 08:07:42 crc kubenswrapper[4908]: I0131 08:07:42.073740 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" event={"ID":"55dbf02b-7749-423f-9f04-7b2d545b9eaa","Type":"ContainerDied","Data":"4df0825cb614b514029ac083d233b9f2bee2a2c153d4f2ca995ab45b99190d0e"} Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.507635 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599158 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599229 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599301 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599340 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599363 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.599389 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb8k\" (UniqueName: \"kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k\") pod \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\" (UID: \"55dbf02b-7749-423f-9f04-7b2d545b9eaa\") " Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.609418 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.609449 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph" (OuterVolumeSpecName: "ceph") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.610156 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k" (OuterVolumeSpecName: "kube-api-access-8zb8k") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "kube-api-access-8zb8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.622187 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.628258 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory" (OuterVolumeSpecName: "inventory") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.634617 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55dbf02b-7749-423f-9f04-7b2d545b9eaa" (UID: "55dbf02b-7749-423f-9f04-7b2d545b9eaa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702047 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702102 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702120 4908 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702133 4908 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702142 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55dbf02b-7749-423f-9f04-7b2d545b9eaa-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:43 crc kubenswrapper[4908]: I0131 08:07:43.702152 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zb8k\" (UniqueName: \"kubernetes.io/projected/55dbf02b-7749-423f-9f04-7b2d545b9eaa-kube-api-access-8zb8k\") on node \"crc\" DevicePath \"\"" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.099911 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" 
event={"ID":"55dbf02b-7749-423f-9f04-7b2d545b9eaa","Type":"ContainerDied","Data":"e4993a073f803adec56cf99ba0b7dfa8f4dfef1c0e7204bed23d3a32d1ae7940"} Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.099959 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4993a073f803adec56cf99ba0b7dfa8f4dfef1c0e7204bed23d3a32d1ae7940" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.100019 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gx7p" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.177846 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x"] Jan 31 08:07:44 crc kubenswrapper[4908]: E0131 08:07:44.179634 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dbf02b-7749-423f-9f04-7b2d545b9eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.179664 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dbf02b-7749-423f-9f04-7b2d545b9eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.179939 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dbf02b-7749-423f-9f04-7b2d545b9eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.180812 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.183803 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.184279 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.184571 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.184785 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.184958 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.185204 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.187770 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.189303 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x"] Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318424 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc 
kubenswrapper[4908]: I0131 08:07:44.318490 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318517 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318535 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjrz\" (UniqueName: \"kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318741 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 
08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318821 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.318850 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420551 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420649 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420687 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420776 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420807 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420830 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjrz\" (UniqueName: \"kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.420859 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.424351 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.424424 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.424839 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.431483 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: 
\"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.439650 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.440123 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.444515 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjrz\" (UniqueName: \"kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:44 crc kubenswrapper[4908]: I0131 08:07:44.504455 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" Jan 31 08:07:45 crc kubenswrapper[4908]: I0131 08:07:45.040410 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x"] Jan 31 08:07:45 crc kubenswrapper[4908]: I0131 08:07:45.109060 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" event={"ID":"697af640-d294-4a01-a0d5-1ee46f8b75ae","Type":"ContainerStarted","Data":"80ed0eca9a07abe3228e36f26fd4a9422ca188b67ebdaa5238971c0a0a6ea06d"} Jan 31 08:07:48 crc kubenswrapper[4908]: I0131 08:07:48.135951 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" event={"ID":"697af640-d294-4a01-a0d5-1ee46f8b75ae","Type":"ContainerStarted","Data":"1ee18040dcafe5e47f3f522150959c61bac04608c06ef249583bf9807507ecf8"} Jan 31 08:07:48 crc kubenswrapper[4908]: I0131 08:07:48.162040 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" podStartSLOduration=2.163684156 podStartE2EDuration="4.16200342s" podCreationTimestamp="2026-01-31 08:07:44 +0000 UTC" firstStartedPulling="2026-01-31 08:07:45.041436379 +0000 UTC m=+2771.657381033" lastFinishedPulling="2026-01-31 08:07:47.039755643 +0000 UTC m=+2773.655700297" observedRunningTime="2026-01-31 08:07:48.157534635 +0000 UTC m=+2774.773479289" watchObservedRunningTime="2026-01-31 08:07:48.16200342 +0000 UTC m=+2774.777948074" Jan 31 08:08:10 crc kubenswrapper[4908]: I0131 08:08:10.431050 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 
08:08:10 crc kubenswrapper[4908]: I0131 08:08:10.433123 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:08:40 crc kubenswrapper[4908]: I0131 08:08:40.431086 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:08:40 crc kubenswrapper[4908]: I0131 08:08:40.431622 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:08:48 crc kubenswrapper[4908]: I0131 08:08:48.262661 4908 generic.go:334] "Generic (PLEG): container finished" podID="697af640-d294-4a01-a0d5-1ee46f8b75ae" containerID="1ee18040dcafe5e47f3f522150959c61bac04608c06ef249583bf9807507ecf8" exitCode=0
Jan 31 08:08:48 crc kubenswrapper[4908]: I0131 08:08:48.262746 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" event={"ID":"697af640-d294-4a01-a0d5-1ee46f8b75ae","Type":"ContainerDied","Data":"1ee18040dcafe5e47f3f522150959c61bac04608c06ef249583bf9807507ecf8"}
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.642629 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x"
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.793613 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.793722 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.793779 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.793893 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.793931 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjrz\" (UniqueName: \"kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.794056 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.794103 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0\") pod \"697af640-d294-4a01-a0d5-1ee46f8b75ae\" (UID: \"697af640-d294-4a01-a0d5-1ee46f8b75ae\") "
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.800336 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph" (OuterVolumeSpecName: "ceph") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.804308 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.804414 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz" (OuterVolumeSpecName: "kube-api-access-npjrz") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "kube-api-access-npjrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.821226 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.821690 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.824004 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.836616 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory" (OuterVolumeSpecName: "inventory") pod "697af640-d294-4a01-a0d5-1ee46f8b75ae" (UID: "697af640-d294-4a01-a0d5-1ee46f8b75ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896217 4908 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896260 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896271 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896284 4908 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896296 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjrz\" (UniqueName: \"kubernetes.io/projected/697af640-d294-4a01-a0d5-1ee46f8b75ae-kube-api-access-npjrz\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896308 4908 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:49 crc kubenswrapper[4908]: I0131 08:08:49.896319 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697af640-d294-4a01-a0d5-1ee46f8b75ae-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.283999 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x" event={"ID":"697af640-d294-4a01-a0d5-1ee46f8b75ae","Type":"ContainerDied","Data":"80ed0eca9a07abe3228e36f26fd4a9422ca188b67ebdaa5238971c0a0a6ea06d"}
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.284602 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ed0eca9a07abe3228e36f26fd4a9422ca188b67ebdaa5238971c0a0a6ea06d"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.284118 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.459909 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"]
Jan 31 08:08:50 crc kubenswrapper[4908]: E0131 08:08:50.460341 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697af640-d294-4a01-a0d5-1ee46f8b75ae" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.460358 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="697af640-d294-4a01-a0d5-1ee46f8b75ae" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.460522 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="697af640-d294-4a01-a0d5-1ee46f8b75ae" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.461149 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.466460 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.466525 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.466600 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.466761 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.466785 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.467005 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.468492 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"]
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.618652 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.618703 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7pj\" (UniqueName: \"kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.618959 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.619078 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.619109 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.619155 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721382 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721435 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7pj\" (UniqueName: \"kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721532 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721558 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721578 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.721610 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.726365 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.730334 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.737476 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.740635 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.750667 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.758101 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7pj\" (UniqueName: \"kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8jfph\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:50 crc kubenswrapper[4908]: I0131 08:08:50.784638 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:08:51 crc kubenswrapper[4908]: I0131 08:08:51.117256 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"]
Jan 31 08:08:51 crc kubenswrapper[4908]: I0131 08:08:51.123193 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 08:08:51 crc kubenswrapper[4908]: I0131 08:08:51.292719 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph" event={"ID":"aa48837f-3373-4699-936f-a64a1c1daf15","Type":"ContainerStarted","Data":"029faaafa34e610124b48e636203a764e08c8cdb894719561ef9465254030708"}
Jan 31 08:08:52 crc kubenswrapper[4908]: I0131 08:08:52.302819 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph" event={"ID":"aa48837f-3373-4699-936f-a64a1c1daf15","Type":"ContainerStarted","Data":"cdd1ef7b672899c17f255852a01a92a61d79c99b8ff8205e3be0e7661ffe57e6"}
Jan 31 08:08:52 crc kubenswrapper[4908]: I0131 08:08:52.325304 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph" podStartSLOduration=1.5828518649999999 podStartE2EDuration="2.325284604s" podCreationTimestamp="2026-01-31 08:08:50 +0000 UTC" firstStartedPulling="2026-01-31 08:08:51.12295917 +0000 UTC m=+2837.738903824" lastFinishedPulling="2026-01-31 08:08:51.865391909 +0000 UTC m=+2838.481336563" observedRunningTime="2026-01-31 08:08:52.318453189 +0000 UTC m=+2838.934397843" watchObservedRunningTime="2026-01-31 08:08:52.325284604 +0000 UTC m=+2838.941229248"
Jan 31 08:09:10 crc kubenswrapper[4908]: I0131 08:09:10.430743 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:09:10 crc kubenswrapper[4908]: I0131 08:09:10.431297 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:09:10 crc kubenswrapper[4908]: I0131 08:09:10.431337 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm"
Jan 31 08:09:10 crc kubenswrapper[4908]: I0131 08:09:10.432038 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 08:09:10 crc kubenswrapper[4908]: I0131 08:09:10.432111 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05" gracePeriod=600
Jan 31 08:09:11 crc kubenswrapper[4908]: I0131 08:09:11.479249 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05" exitCode=0
Jan 31 08:09:11 crc kubenswrapper[4908]: I0131 08:09:11.479334 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05"}
Jan 31 08:09:11 crc kubenswrapper[4908]: I0131 08:09:11.480038 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"}
Jan 31 08:09:11 crc kubenswrapper[4908]: I0131 08:09:11.480071 4908 scope.go:117] "RemoveContainer" containerID="d321a20c432d157f2e989956fa10f485b4edddb1fbbf593e417be8067e4c0bf7"
Jan 31 08:11:40 crc kubenswrapper[4908]: I0131 08:11:40.430601 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:11:40 crc kubenswrapper[4908]: I0131 08:11:40.433154 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:12:10 crc kubenswrapper[4908]: I0131 08:12:10.430627 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:12:10 crc kubenswrapper[4908]: I0131 08:12:10.431216 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.431267 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.432293 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.432373 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.433558 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.433635 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" gracePeriod=600
Jan 31 08:12:40 crc kubenswrapper[4908]: E0131 08:12:40.585621 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.830710 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" exitCode=0
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.830776 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"}
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.830829 4908 scope.go:117] "RemoveContainer" containerID="88e354e219e21d62e1709e8178e0b0f3f840c23e4d2b616dfc11d69f7cc30d05"
Jan 31 08:12:40 crc kubenswrapper[4908]: I0131 08:12:40.831902 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"
Jan 31 08:12:40 crc kubenswrapper[4908]: E0131 08:12:40.832294 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:12:47 crc kubenswrapper[4908]: I0131 08:12:47.891273 4908 generic.go:334] "Generic (PLEG): container finished" podID="aa48837f-3373-4699-936f-a64a1c1daf15" containerID="cdd1ef7b672899c17f255852a01a92a61d79c99b8ff8205e3be0e7661ffe57e6" exitCode=0
Jan 31 08:12:47 crc kubenswrapper[4908]: I0131 08:12:47.891363 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph" event={"ID":"aa48837f-3373-4699-936f-a64a1c1daf15","Type":"ContainerDied","Data":"cdd1ef7b672899c17f255852a01a92a61d79c99b8ff8205e3be0e7661ffe57e6"}
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.290908 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351332 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf7pj\" (UniqueName: \"kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351397 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351478 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351531 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351600 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.351622 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0\") pod \"aa48837f-3373-4699-936f-a64a1c1daf15\" (UID: \"aa48837f-3373-4699-936f-a64a1c1daf15\") "
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.358690 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.359556 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph" (OuterVolumeSpecName: "ceph") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.360153 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj" (OuterVolumeSpecName: "kube-api-access-sf7pj") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "kube-api-access-sf7pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.385584 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory" (OuterVolumeSpecName: "inventory") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.387694 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.388597 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa48837f-3373-4699-936f-a64a1c1daf15" (UID: "aa48837f-3373-4699-936f-a64a1c1daf15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454274 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf7pj\" (UniqueName: \"kubernetes.io/projected/aa48837f-3373-4699-936f-a64a1c1daf15-kube-api-access-sf7pj\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454331 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454376 4908 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454394 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454408 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.454422 4908 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aa48837f-3373-4699-936f-a64a1c1daf15-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.926113 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph" event={"ID":"aa48837f-3373-4699-936f-a64a1c1daf15","Type":"ContainerDied","Data":"029faaafa34e610124b48e636203a764e08c8cdb894719561ef9465254030708"}
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.926449 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029faaafa34e610124b48e636203a764e08c8cdb894719561ef9465254030708"
Jan 31 08:12:49 crc kubenswrapper[4908]: I0131 08:12:49.926330 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8jfph"
Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.011421 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66"]
Jan 31 08:12:50 crc kubenswrapper[4908]: E0131 08:12:50.011918 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa48837f-3373-4699-936f-a64a1c1daf15" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.011943 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa48837f-3373-4699-936f-a64a1c1daf15" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.012237 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa48837f-3373-4699-936f-a64a1c1daf15" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.012844 4908 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.016673 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.016897 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.022879 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.023308 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.025608 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vgwb9" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.025725 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.025627 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.025928 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.026003 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.026823 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66"] Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065305 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065489 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065714 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065872 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npb9\" (UniqueName: \"kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065923 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.065948 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.066002 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.066070 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.066089 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.066132 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.066180 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.168886 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npb9\" (UniqueName: \"kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169004 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" 
(UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169039 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169081 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169140 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169169 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169204 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169233 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169291 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169337 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.169390 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.170711 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.176048 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.176452 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.176803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.176817 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.176936 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.177518 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.178574 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc 
kubenswrapper[4908]: I0131 08:12:50.178885 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.179003 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.193139 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npb9\" (UniqueName: \"kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.329375 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.908336 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66"] Jan 31 08:12:50 crc kubenswrapper[4908]: I0131 08:12:50.943423 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" event={"ID":"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b","Type":"ContainerStarted","Data":"b5a5401fec7a0028418845d6b94adf5917cdab076eb6267f733041e9039b45e2"} Jan 31 08:12:51 crc kubenswrapper[4908]: I0131 08:12:51.952559 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" event={"ID":"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b","Type":"ContainerStarted","Data":"49ae096e6026bc0ec270a9d7ee9a6e5359780f38006fdd2428c146191bdb8759"} Jan 31 08:12:51 crc kubenswrapper[4908]: I0131 08:12:51.976198 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" podStartSLOduration=2.357820256 podStartE2EDuration="2.976178656s" podCreationTimestamp="2026-01-31 08:12:49 +0000 UTC" firstStartedPulling="2026-01-31 08:12:50.91672051 +0000 UTC m=+3077.532665164" lastFinishedPulling="2026-01-31 08:12:51.53507891 +0000 UTC m=+3078.151023564" observedRunningTime="2026-01-31 08:12:51.972705099 +0000 UTC m=+3078.588649773" watchObservedRunningTime="2026-01-31 08:12:51.976178656 +0000 UTC m=+3078.592123320" Jan 31 08:12:53 crc kubenswrapper[4908]: I0131 08:12:53.940067 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:12:53 crc kubenswrapper[4908]: E0131 08:12:53.940558 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:13:06 crc kubenswrapper[4908]: I0131 08:13:06.940651 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:13:06 crc kubenswrapper[4908]: E0131 08:13:06.942269 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:13:19 crc kubenswrapper[4908]: I0131 08:13:19.942194 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:13:19 crc kubenswrapper[4908]: E0131 08:13:19.943175 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:13:30 crc kubenswrapper[4908]: I0131 08:13:30.941376 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:13:30 crc kubenswrapper[4908]: E0131 08:13:30.945064 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:13:43 crc kubenswrapper[4908]: I0131 08:13:43.940826 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:13:43 crc kubenswrapper[4908]: E0131 08:13:43.941500 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:13:57 crc kubenswrapper[4908]: I0131 08:13:57.946003 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:13:57 crc kubenswrapper[4908]: E0131 08:13:57.946653 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:14:08 crc kubenswrapper[4908]: I0131 08:14:08.941042 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:14:08 crc kubenswrapper[4908]: E0131 08:14:08.941796 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:14:20 crc kubenswrapper[4908]: I0131 08:14:20.941114 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:14:20 crc kubenswrapper[4908]: E0131 08:14:20.942134 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:14:33 crc kubenswrapper[4908]: I0131 08:14:33.940312 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:14:33 crc kubenswrapper[4908]: E0131 08:14:33.941628 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:14:45 crc kubenswrapper[4908]: I0131 08:14:45.939897 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:14:45 crc kubenswrapper[4908]: E0131 08:14:45.940900 4908 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:14:57 crc kubenswrapper[4908]: I0131 08:14:57.945998 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:14:57 crc kubenswrapper[4908]: E0131 08:14:57.946713 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.157772 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w"] Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.159247 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.161174 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.162588 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.177431 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w"] Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.231914 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.232016 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9pk\" (UniqueName: \"kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.232116 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.333362 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.333441 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9pk\" (UniqueName: \"kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.333518 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.334343 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.339225 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.350963 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9pk\" (UniqueName: \"kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk\") pod \"collect-profiles-29497455-68c8w\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.477650 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:00 crc kubenswrapper[4908]: I0131 08:15:00.896184 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w"] Jan 31 08:15:01 crc kubenswrapper[4908]: I0131 08:15:01.031267 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" event={"ID":"a3873dd5-f913-48ef-a2b4-f3fc5e442b61","Type":"ContainerStarted","Data":"d7ba6114ac41aaf130e14bc80574b6b0e85b5e10642497ee7cfc41662f907f1f"} Jan 31 08:15:02 crc kubenswrapper[4908]: I0131 08:15:02.043448 4908 generic.go:334] "Generic (PLEG): container finished" podID="a3873dd5-f913-48ef-a2b4-f3fc5e442b61" containerID="ad9c3e751330496ece65eea7c818ec03ec6e90ed16e7993ca3e2eec49ab5dc53" exitCode=0 Jan 31 08:15:02 crc kubenswrapper[4908]: I0131 08:15:02.043516 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" 
event={"ID":"a3873dd5-f913-48ef-a2b4-f3fc5e442b61","Type":"ContainerDied","Data":"ad9c3e751330496ece65eea7c818ec03ec6e90ed16e7993ca3e2eec49ab5dc53"} Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.363793 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.487185 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume\") pod \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.487356 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume\") pod \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.487573 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx9pk\" (UniqueName: \"kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk\") pod \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\" (UID: \"a3873dd5-f913-48ef-a2b4-f3fc5e442b61\") " Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.488322 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3873dd5-f913-48ef-a2b4-f3fc5e442b61" (UID: "a3873dd5-f913-48ef-a2b4-f3fc5e442b61"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.492820 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3873dd5-f913-48ef-a2b4-f3fc5e442b61" (UID: "a3873dd5-f913-48ef-a2b4-f3fc5e442b61"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.494304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk" (OuterVolumeSpecName: "kube-api-access-kx9pk") pod "a3873dd5-f913-48ef-a2b4-f3fc5e442b61" (UID: "a3873dd5-f913-48ef-a2b4-f3fc5e442b61"). InnerVolumeSpecName "kube-api-access-kx9pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.590075 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.590119 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx9pk\" (UniqueName: \"kubernetes.io/projected/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-kube-api-access-kx9pk\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4908]: I0131 08:15:03.590131 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3873dd5-f913-48ef-a2b4-f3fc5e442b61-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.065924 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" 
event={"ID":"a3873dd5-f913-48ef-a2b4-f3fc5e442b61","Type":"ContainerDied","Data":"d7ba6114ac41aaf130e14bc80574b6b0e85b5e10642497ee7cfc41662f907f1f"} Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.066186 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ba6114ac41aaf130e14bc80574b6b0e85b5e10642497ee7cfc41662f907f1f" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.066045 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.449139 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg"] Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.457137 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-g5phg"] Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.657392 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:04 crc kubenswrapper[4908]: E0131 08:15:04.657810 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3873dd5-f913-48ef-a2b4-f3fc5e442b61" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.657830 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3873dd5-f913-48ef-a2b4-f3fc5e442b61" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.658348 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3873dd5-f913-48ef-a2b4-f3fc5e442b61" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.659550 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.675859 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.712603 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.712684 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.712850 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkqf\" (UniqueName: \"kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.814554 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.814675 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5tkqf\" (UniqueName: \"kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.814761 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.815148 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.815232 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.843165 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkqf\" (UniqueName: \"kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf\") pod \"community-operators-m4xbt\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.855964 4908 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.858563 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.875417 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.920490 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.920542 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vqqm\" (UniqueName: \"kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.920594 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:04 crc kubenswrapper[4908]: I0131 08:15:04.976800 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.022183 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.022719 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.022870 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vqqm\" (UniqueName: \"kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.026122 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.028155 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " 
pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.050040 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vqqm\" (UniqueName: \"kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm\") pod \"certified-operators-tdtx4\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.233583 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.509890 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.798251 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:05 crc kubenswrapper[4908]: W0131 08:15:05.843332 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10ecfb2f_b880_4d07_9060_b7d8cf7e92d5.slice/crio-f05cce76bbd65e50fb0863dd4bc0e653f002b4fa32306bc401c8485d3b12be1e WatchSource:0}: Error finding container f05cce76bbd65e50fb0863dd4bc0e653f002b4fa32306bc401c8485d3b12be1e: Status 404 returned error can't find the container with id f05cce76bbd65e50fb0863dd4bc0e653f002b4fa32306bc401c8485d3b12be1e Jan 31 08:15:05 crc kubenswrapper[4908]: I0131 08:15:05.953589 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62329ea4-99fc-4e24-8a90-00dbf5b649cc" path="/var/lib/kubelet/pods/62329ea4-99fc-4e24-8a90-00dbf5b649cc/volumes" Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.086084 4908 generic.go:334] "Generic (PLEG): container finished" podID="8b182a74-a85d-4ac2-a607-deb101b1465b" 
containerID="5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd" exitCode=0 Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.086161 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerDied","Data":"5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd"} Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.086189 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerStarted","Data":"6a32573748f28fc7822f9a0043664a2879d8262f9a447b4dc45ebc4c85b69461"} Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.088031 4908 generic.go:334] "Generic (PLEG): container finished" podID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerID="c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e" exitCode=0 Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.088068 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerDied","Data":"c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e"} Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.088089 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerStarted","Data":"f05cce76bbd65e50fb0863dd4bc0e653f002b4fa32306bc401c8485d3b12be1e"} Jan 31 08:15:06 crc kubenswrapper[4908]: I0131 08:15:06.089468 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:15:07 crc kubenswrapper[4908]: I0131 08:15:07.098741 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" 
event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerStarted","Data":"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7"} Jan 31 08:15:07 crc kubenswrapper[4908]: I0131 08:15:07.101171 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerStarted","Data":"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090"} Jan 31 08:15:08 crc kubenswrapper[4908]: I0131 08:15:08.111941 4908 generic.go:334] "Generic (PLEG): container finished" podID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerID="96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7" exitCode=0 Jan 31 08:15:08 crc kubenswrapper[4908]: I0131 08:15:08.112020 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerDied","Data":"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7"} Jan 31 08:15:08 crc kubenswrapper[4908]: I0131 08:15:08.114342 4908 generic.go:334] "Generic (PLEG): container finished" podID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerID="61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090" exitCode=0 Jan 31 08:15:08 crc kubenswrapper[4908]: I0131 08:15:08.114381 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerDied","Data":"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090"} Jan 31 08:15:08 crc kubenswrapper[4908]: I0131 08:15:08.940688 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:15:08 crc kubenswrapper[4908]: E0131 08:15:08.941274 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.126031 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerStarted","Data":"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd"} Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.129924 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerStarted","Data":"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161"} Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.148796 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m4xbt" podStartSLOduration=2.466525068 podStartE2EDuration="5.148773829s" podCreationTimestamp="2026-01-31 08:15:04 +0000 UTC" firstStartedPulling="2026-01-31 08:15:06.0891989 +0000 UTC m=+3212.705143554" lastFinishedPulling="2026-01-31 08:15:08.771447651 +0000 UTC m=+3215.387392315" observedRunningTime="2026-01-31 08:15:09.144627415 +0000 UTC m=+3215.760572089" watchObservedRunningTime="2026-01-31 08:15:09.148773829 +0000 UTC m=+3215.764718493" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.171722 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tdtx4" podStartSLOduration=2.359756292 podStartE2EDuration="5.171703614s" podCreationTimestamp="2026-01-31 08:15:04 +0000 UTC" firstStartedPulling="2026-01-31 08:15:06.090146074 +0000 UTC m=+3212.706090728" 
lastFinishedPulling="2026-01-31 08:15:08.902093406 +0000 UTC m=+3215.518038050" observedRunningTime="2026-01-31 08:15:09.163278083 +0000 UTC m=+3215.779222737" watchObservedRunningTime="2026-01-31 08:15:09.171703614 +0000 UTC m=+3215.787648268" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.650674 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.652410 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.661892 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.704790 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2jn\" (UniqueName: \"kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.704859 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.705030 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " 
pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.806348 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.806491 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.806565 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2jn\" (UniqueName: \"kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.806942 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.806949 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc 
kubenswrapper[4908]: I0131 08:15:09.826943 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2jn\" (UniqueName: \"kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn\") pod \"redhat-operators-l8b62\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:09 crc kubenswrapper[4908]: I0131 08:15:09.974111 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:10 crc kubenswrapper[4908]: W0131 08:15:10.263176 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee5c944_7f9c_4dcb_9d7d_62d0447d10e1.slice/crio-c08884ad0231e2e8d8251fdda257ce2086eb4cfa6d83fc487ab14076b2ec319d WatchSource:0}: Error finding container c08884ad0231e2e8d8251fdda257ce2086eb4cfa6d83fc487ab14076b2ec319d: Status 404 returned error can't find the container with id c08884ad0231e2e8d8251fdda257ce2086eb4cfa6d83fc487ab14076b2ec319d Jan 31 08:15:10 crc kubenswrapper[4908]: I0131 08:15:10.277305 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:11 crc kubenswrapper[4908]: I0131 08:15:11.147558 4908 generic.go:334] "Generic (PLEG): container finished" podID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerID="525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af" exitCode=0 Jan 31 08:15:11 crc kubenswrapper[4908]: I0131 08:15:11.147850 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerDied","Data":"525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af"} Jan 31 08:15:11 crc kubenswrapper[4908]: I0131 08:15:11.147876 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerStarted","Data":"c08884ad0231e2e8d8251fdda257ce2086eb4cfa6d83fc487ab14076b2ec319d"} Jan 31 08:15:12 crc kubenswrapper[4908]: I0131 08:15:12.161770 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerStarted","Data":"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed"} Jan 31 08:15:13 crc kubenswrapper[4908]: I0131 08:15:13.174692 4908 generic.go:334] "Generic (PLEG): container finished" podID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerID="fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed" exitCode=0 Jan 31 08:15:13 crc kubenswrapper[4908]: I0131 08:15:13.174751 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerDied","Data":"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed"} Jan 31 08:15:14 crc kubenswrapper[4908]: I0131 08:15:14.977318 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:14 crc kubenswrapper[4908]: I0131 08:15:14.977807 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:15 crc kubenswrapper[4908]: I0131 08:15:15.038505 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:15 crc kubenswrapper[4908]: I0131 08:15:15.234226 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:15 crc kubenswrapper[4908]: I0131 08:15:15.234264 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:15 crc kubenswrapper[4908]: I0131 08:15:15.241245 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:15 crc kubenswrapper[4908]: I0131 08:15:15.279117 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:16 crc kubenswrapper[4908]: I0131 08:15:16.246021 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:18 crc kubenswrapper[4908]: I0131 08:15:18.043512 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:18 crc kubenswrapper[4908]: I0131 08:15:18.043855 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m4xbt" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="registry-server" containerID="cri-o://0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd" gracePeriod=2 Jan 31 08:15:18 crc kubenswrapper[4908]: I0131 08:15:18.644133 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:18 crc kubenswrapper[4908]: I0131 08:15:18.644705 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tdtx4" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="registry-server" containerID="cri-o://823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161" gracePeriod=2 Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.087785 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.092515 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217584 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content\") pod \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217677 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content\") pod \"8b182a74-a85d-4ac2-a607-deb101b1465b\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217701 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities\") pod \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\" (UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217730 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities\") pod \"8b182a74-a85d-4ac2-a607-deb101b1465b\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217771 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vqqm\" (UniqueName: \"kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm\") pod \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\" 
(UID: \"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.217862 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkqf\" (UniqueName: \"kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf\") pod \"8b182a74-a85d-4ac2-a607-deb101b1465b\" (UID: \"8b182a74-a85d-4ac2-a607-deb101b1465b\") " Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.218737 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities" (OuterVolumeSpecName: "utilities") pod "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" (UID: "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.219386 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.219718 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities" (OuterVolumeSpecName: "utilities") pod "8b182a74-a85d-4ac2-a607-deb101b1465b" (UID: "8b182a74-a85d-4ac2-a607-deb101b1465b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.225332 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf" (OuterVolumeSpecName: "kube-api-access-5tkqf") pod "8b182a74-a85d-4ac2-a607-deb101b1465b" (UID: "8b182a74-a85d-4ac2-a607-deb101b1465b"). InnerVolumeSpecName "kube-api-access-5tkqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.229170 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm" (OuterVolumeSpecName: "kube-api-access-5vqqm") pod "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" (UID: "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5"). InnerVolumeSpecName "kube-api-access-5vqqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.238873 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerStarted","Data":"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9"} Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.258640 4908 generic.go:334] "Generic (PLEG): container finished" podID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerID="0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd" exitCode=0 Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.258723 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerDied","Data":"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd"} Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.259106 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4xbt" event={"ID":"8b182a74-a85d-4ac2-a607-deb101b1465b","Type":"ContainerDied","Data":"6a32573748f28fc7822f9a0043664a2879d8262f9a447b4dc45ebc4c85b69461"} Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.259134 4908 scope.go:117] "RemoveContainer" containerID="0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.259304 4908 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4xbt" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.260189 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8b62" podStartSLOduration=2.682958129 podStartE2EDuration="11.260168391s" podCreationTimestamp="2026-01-31 08:15:09 +0000 UTC" firstStartedPulling="2026-01-31 08:15:11.14958994 +0000 UTC m=+3217.765534594" lastFinishedPulling="2026-01-31 08:15:19.726800202 +0000 UTC m=+3226.342744856" observedRunningTime="2026-01-31 08:15:20.257637528 +0000 UTC m=+3226.873582182" watchObservedRunningTime="2026-01-31 08:15:20.260168391 +0000 UTC m=+3226.876113045" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.265204 4908 generic.go:334] "Generic (PLEG): container finished" podID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerID="823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161" exitCode=0 Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.265258 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerDied","Data":"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161"} Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.265292 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdtx4" event={"ID":"10ecfb2f-b880-4d07-9060-b7d8cf7e92d5","Type":"ContainerDied","Data":"f05cce76bbd65e50fb0863dd4bc0e653f002b4fa32306bc401c8485d3b12be1e"} Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.265360 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdtx4" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.278972 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" (UID: "10ecfb2f-b880-4d07-9060-b7d8cf7e92d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.288801 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b182a74-a85d-4ac2-a607-deb101b1465b" (UID: "8b182a74-a85d-4ac2-a607-deb101b1465b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.320815 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.320844 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.320854 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b182a74-a85d-4ac2-a607-deb101b1465b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.320867 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vqqm\" (UniqueName: 
\"kubernetes.io/projected/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5-kube-api-access-5vqqm\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.320878 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tkqf\" (UniqueName: \"kubernetes.io/projected/8b182a74-a85d-4ac2-a607-deb101b1465b-kube-api-access-5tkqf\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.326271 4908 scope.go:117] "RemoveContainer" containerID="96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.365824 4908 scope.go:117] "RemoveContainer" containerID="5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.386307 4908 scope.go:117] "RemoveContainer" containerID="0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.386811 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd\": container with ID starting with 0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd not found: ID does not exist" containerID="0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.386876 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd"} err="failed to get container status \"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd\": rpc error: code = NotFound desc = could not find container \"0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd\": container with ID starting with 0411441c9f6bfd8a429f60d4f873f6cfd0e0932082c6d6e347d28de32f3167cd not found: ID does not exist" 
Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.386912 4908 scope.go:117] "RemoveContainer" containerID="96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.387303 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7\": container with ID starting with 96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7 not found: ID does not exist" containerID="96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.387343 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7"} err="failed to get container status \"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7\": rpc error: code = NotFound desc = could not find container \"96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7\": container with ID starting with 96558a85429fa3ddaa6aa165866c6f0a9368d0dd4f0bfca91163de44ad95dfe7 not found: ID does not exist" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.387391 4908 scope.go:117] "RemoveContainer" containerID="5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.387710 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd\": container with ID starting with 5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd not found: ID does not exist" containerID="5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.387749 4908 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd"} err="failed to get container status \"5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd\": rpc error: code = NotFound desc = could not find container \"5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd\": container with ID starting with 5d7c63d6c65b9eeacb78ddb7424fe578a1e3ef74edc82dc9951c0d7196f2d3dd not found: ID does not exist" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.387771 4908 scope.go:117] "RemoveContainer" containerID="823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.426061 4908 scope.go:117] "RemoveContainer" containerID="61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.456169 4908 scope.go:117] "RemoveContainer" containerID="c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.488540 4908 scope.go:117] "RemoveContainer" containerID="823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.489033 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161\": container with ID starting with 823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161 not found: ID does not exist" containerID="823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.489066 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161"} err="failed to get container status \"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161\": 
rpc error: code = NotFound desc = could not find container \"823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161\": container with ID starting with 823e7a16a56dadef5c4aa63fd9d9770a2ee933b45b130c191047c823af20b161 not found: ID does not exist" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.489091 4908 scope.go:117] "RemoveContainer" containerID="61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.490084 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090\": container with ID starting with 61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090 not found: ID does not exist" containerID="61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.490138 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090"} err="failed to get container status \"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090\": rpc error: code = NotFound desc = could not find container \"61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090\": container with ID starting with 61ef199c197f83b30ffec6d258ed041034a381a435e13ea3fa2e17c499377090 not found: ID does not exist" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.490168 4908 scope.go:117] "RemoveContainer" containerID="c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e" Jan 31 08:15:20 crc kubenswrapper[4908]: E0131 08:15:20.490534 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e\": container with ID starting with 
c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e not found: ID does not exist" containerID="c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.490588 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e"} err="failed to get container status \"c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e\": rpc error: code = NotFound desc = could not find container \"c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e\": container with ID starting with c0ea94f2381b858a0743a2a49a3f51ca384e3b53bcc3bf9c028f850b1882662e not found: ID does not exist" Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.610189 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.624210 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m4xbt"] Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.632833 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:20 crc kubenswrapper[4908]: I0131 08:15:20.641704 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tdtx4"] Jan 31 08:15:21 crc kubenswrapper[4908]: I0131 08:15:21.939845 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:15:21 crc kubenswrapper[4908]: E0131 08:15:21.941172 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:15:21 crc kubenswrapper[4908]: I0131 08:15:21.948733 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" path="/var/lib/kubelet/pods/10ecfb2f-b880-4d07-9060-b7d8cf7e92d5/volumes" Jan 31 08:15:21 crc kubenswrapper[4908]: I0131 08:15:21.949432 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" path="/var/lib/kubelet/pods/8b182a74-a85d-4ac2-a607-deb101b1465b/volumes" Jan 31 08:15:29 crc kubenswrapper[4908]: I0131 08:15:29.975542 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:29 crc kubenswrapper[4908]: I0131 08:15:29.976054 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:30 crc kubenswrapper[4908]: I0131 08:15:30.016526 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:30 crc kubenswrapper[4908]: I0131 08:15:30.402170 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:31 crc kubenswrapper[4908]: I0131 08:15:31.052592 4908 scope.go:117] "RemoveContainer" containerID="dd1ab417af1dcd6749e8583d60cc88d895456ebce2e8a1d59fffdf29d8aa519f" Jan 31 08:15:32 crc kubenswrapper[4908]: I0131 08:15:32.507113 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:33 crc kubenswrapper[4908]: I0131 08:15:33.380620 4908 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-l8b62" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="registry-server" containerID="cri-o://e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9" gracePeriod=2 Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.332188 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.339854 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z2jn\" (UniqueName: \"kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn\") pod \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.341034 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities\") pod \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.341354 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content\") pod \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\" (UID: \"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1\") " Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.357306 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities" (OuterVolumeSpecName: "utilities") pod "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" (UID: "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.358095 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn" (OuterVolumeSpecName: "kube-api-access-2z2jn") pod "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" (UID: "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1"). InnerVolumeSpecName "kube-api-access-2z2jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.392166 4908 generic.go:334] "Generic (PLEG): container finished" podID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerID="e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9" exitCode=0 Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.392224 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerDied","Data":"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9"} Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.392260 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8b62" event={"ID":"7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1","Type":"ContainerDied","Data":"c08884ad0231e2e8d8251fdda257ce2086eb4cfa6d83fc487ab14076b2ec319d"} Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.392260 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8b62" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.392289 4908 scope.go:117] "RemoveContainer" containerID="e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.418124 4908 scope.go:117] "RemoveContainer" containerID="fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.437568 4908 scope.go:117] "RemoveContainer" containerID="525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.443862 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.443937 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z2jn\" (UniqueName: \"kubernetes.io/projected/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-kube-api-access-2z2jn\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.476919 4908 scope.go:117] "RemoveContainer" containerID="e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9" Jan 31 08:15:34 crc kubenswrapper[4908]: E0131 08:15:34.477511 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9\": container with ID starting with e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9 not found: ID does not exist" containerID="e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.477558 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9"} err="failed to get container status \"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9\": rpc error: code = NotFound desc = could not find container \"e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9\": container with ID starting with e36d58f65e60b08987a7349cc8e4215f4c7f1fc89957603086cd74fab55d0ff9 not found: ID does not exist" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.477589 4908 scope.go:117] "RemoveContainer" containerID="fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed" Jan 31 08:15:34 crc kubenswrapper[4908]: E0131 08:15:34.477965 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed\": container with ID starting with fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed not found: ID does not exist" containerID="fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.478011 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed"} err="failed to get container status \"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed\": rpc error: code = NotFound desc = could not find container \"fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed\": container with ID starting with fce6d6d35d8025b8458499f18f586f072d327aa32eab5655cb37b3cda94422ed not found: ID does not exist" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.478033 4908 scope.go:117] "RemoveContainer" containerID="525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af" Jan 31 08:15:34 crc kubenswrapper[4908]: E0131 08:15:34.478434 4908 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af\": container with ID starting with 525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af not found: ID does not exist" containerID="525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.478460 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af"} err="failed to get container status \"525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af\": rpc error: code = NotFound desc = could not find container \"525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af\": container with ID starting with 525f7e4b2d00a1ae2377eecf5832506dbf266753d755e80a2a61263c6c3b08af not found: ID does not exist" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.479970 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" (UID: "7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.545915 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.729159 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.738378 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8b62"] Jan 31 08:15:34 crc kubenswrapper[4908]: I0131 08:15:34.939948 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:15:34 crc kubenswrapper[4908]: E0131 08:15:34.940264 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:15:35 crc kubenswrapper[4908]: I0131 08:15:35.952643 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" path="/var/lib/kubelet/pods/7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1/volumes" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.374391 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375502 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="extract-utilities" Jan 31 08:15:42 crc 
kubenswrapper[4908]: I0131 08:15:42.375519 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="extract-utilities" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375535 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375546 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375562 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="extract-utilities" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375570 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="extract-utilities" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375591 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375600 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375615 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375623 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375636 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="registry-server" Jan 31 08:15:42 crc 
kubenswrapper[4908]: I0131 08:15:42.375645 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375657 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375665 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375678 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="extract-utilities" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375685 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="extract-utilities" Jan 31 08:15:42 crc kubenswrapper[4908]: E0131 08:15:42.375704 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375711 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="extract-content" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.375962 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182a74-a85d-4ac2-a607-deb101b1465b" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.376006 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee5c944-7f9c-4dcb-9d7d-62d0447d10e1" containerName="registry-server" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.376029 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ecfb2f-b880-4d07-9060-b7d8cf7e92d5" containerName="registry-server" Jan 31 08:15:42 crc 
kubenswrapper[4908]: I0131 08:15:42.377630 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.384101 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.420850 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.420994 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rsdc\" (UniqueName: \"kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.421124 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.522593 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rsdc\" (UniqueName: \"kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 
08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.522708 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.522744 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.523384 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.523669 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.547371 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rsdc\" (UniqueName: \"kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc\") pod \"redhat-marketplace-7nndl\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:42 crc kubenswrapper[4908]: I0131 08:15:42.696179 4908 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:43 crc kubenswrapper[4908]: I0131 08:15:43.249761 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:43 crc kubenswrapper[4908]: I0131 08:15:43.474011 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerStarted","Data":"302cb276efa5564c66e143a47bf13c99e659f364ac9c7e275b1ee5a3d058ead0"} Jan 31 08:15:43 crc kubenswrapper[4908]: I0131 08:15:43.474368 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerStarted","Data":"ced12b813ce2b4abd10906bd1e22c10061b7f4dec47378c539dce55a8f9cf7c4"} Jan 31 08:15:44 crc kubenswrapper[4908]: I0131 08:15:44.485262 4908 generic.go:334] "Generic (PLEG): container finished" podID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerID="302cb276efa5564c66e143a47bf13c99e659f364ac9c7e275b1ee5a3d058ead0" exitCode=0 Jan 31 08:15:44 crc kubenswrapper[4908]: I0131 08:15:44.485592 4908 generic.go:334] "Generic (PLEG): container finished" podID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerID="184087d08219e5627d5a54b8088e7980e543f3225404dc1775edf9fe3835a85d" exitCode=0 Jan 31 08:15:44 crc kubenswrapper[4908]: I0131 08:15:44.485369 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerDied","Data":"302cb276efa5564c66e143a47bf13c99e659f364ac9c7e275b1ee5a3d058ead0"} Jan 31 08:15:44 crc kubenswrapper[4908]: I0131 08:15:44.485642 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" 
event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerDied","Data":"184087d08219e5627d5a54b8088e7980e543f3225404dc1775edf9fe3835a85d"} Jan 31 08:15:45 crc kubenswrapper[4908]: I0131 08:15:45.496079 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerStarted","Data":"201bef7e810018b4a0e593810612645fcaedf93d38a4a0da2236b6d56ebef463"} Jan 31 08:15:45 crc kubenswrapper[4908]: I0131 08:15:45.522923 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7nndl" podStartSLOduration=2.120478567 podStartE2EDuration="3.522896608s" podCreationTimestamp="2026-01-31 08:15:42 +0000 UTC" firstStartedPulling="2026-01-31 08:15:43.476042774 +0000 UTC m=+3250.091987428" lastFinishedPulling="2026-01-31 08:15:44.878460815 +0000 UTC m=+3251.494405469" observedRunningTime="2026-01-31 08:15:45.512736573 +0000 UTC m=+3252.128681237" watchObservedRunningTime="2026-01-31 08:15:45.522896608 +0000 UTC m=+3252.138841272" Jan 31 08:15:49 crc kubenswrapper[4908]: I0131 08:15:49.941499 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:15:49 crc kubenswrapper[4908]: E0131 08:15:49.943284 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:15:52 crc kubenswrapper[4908]: I0131 08:15:52.697384 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:52 crc 
kubenswrapper[4908]: I0131 08:15:52.697799 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:52 crc kubenswrapper[4908]: I0131 08:15:52.745442 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:53 crc kubenswrapper[4908]: I0131 08:15:53.585952 4908 generic.go:334] "Generic (PLEG): container finished" podID="e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" containerID="49ae096e6026bc0ec270a9d7ee9a6e5359780f38006fdd2428c146191bdb8759" exitCode=0 Jan 31 08:15:53 crc kubenswrapper[4908]: I0131 08:15:53.586026 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" event={"ID":"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b","Type":"ContainerDied","Data":"49ae096e6026bc0ec270a9d7ee9a6e5359780f38006fdd2428c146191bdb8759"} Jan 31 08:15:53 crc kubenswrapper[4908]: I0131 08:15:53.636061 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:53 crc kubenswrapper[4908]: I0131 08:15:53.683927 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:54 crc kubenswrapper[4908]: I0131 08:15:54.986160 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048075 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048168 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048285 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048322 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9npb9\" (UniqueName: \"kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048835 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc 
kubenswrapper[4908]: I0131 08:15:55.048874 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048900 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048929 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.048959 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.049017 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.049040 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0\") pod \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\" (UID: \"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b\") " Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.055638 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph" (OuterVolumeSpecName: "ceph") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.055721 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9" (OuterVolumeSpecName: "kube-api-access-9npb9") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "kube-api-access-9npb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.056188 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.078066 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.079769 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory" (OuterVolumeSpecName: "inventory") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.080215 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.082173 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.082857 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.085186 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.085937 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.086550 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" (UID: "e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150753 4908 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150795 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9npb9\" (UniqueName: \"kubernetes.io/projected/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-kube-api-access-9npb9\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150805 4908 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150814 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150823 4908 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150851 4908 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150862 4908 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-inventory\") on node \"crc\" 
DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150873 4908 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150883 4908 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150891 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.150900 4908 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.607971 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7nndl" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="registry-server" containerID="cri-o://201bef7e810018b4a0e593810612645fcaedf93d38a4a0da2236b6d56ebef463" gracePeriod=2 Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.608310 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.610107 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66" event={"ID":"e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b","Type":"ContainerDied","Data":"b5a5401fec7a0028418845d6b94adf5917cdab076eb6267f733041e9039b45e2"} Jan 31 08:15:55 crc kubenswrapper[4908]: I0131 08:15:55.610143 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a5401fec7a0028418845d6b94adf5917cdab076eb6267f733041e9039b45e2" Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.622932 4908 generic.go:334] "Generic (PLEG): container finished" podID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerID="201bef7e810018b4a0e593810612645fcaedf93d38a4a0da2236b6d56ebef463" exitCode=0 Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.623129 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerDied","Data":"201bef7e810018b4a0e593810612645fcaedf93d38a4a0da2236b6d56ebef463"} Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.873940 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.994050 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities\") pod \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.994174 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rsdc\" (UniqueName: \"kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc\") pod \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.994230 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content\") pod \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\" (UID: \"ea659b0e-ee06-43f9-a771-7f216a1a7bcb\") " Jan 31 08:15:56 crc kubenswrapper[4908]: I0131 08:15:56.995634 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities" (OuterVolumeSpecName: "utilities") pod "ea659b0e-ee06-43f9-a771-7f216a1a7bcb" (UID: "ea659b0e-ee06-43f9-a771-7f216a1a7bcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.000242 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc" (OuterVolumeSpecName: "kube-api-access-9rsdc") pod "ea659b0e-ee06-43f9-a771-7f216a1a7bcb" (UID: "ea659b0e-ee06-43f9-a771-7f216a1a7bcb"). InnerVolumeSpecName "kube-api-access-9rsdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.016636 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea659b0e-ee06-43f9-a771-7f216a1a7bcb" (UID: "ea659b0e-ee06-43f9-a771-7f216a1a7bcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.097378 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.097430 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rsdc\" (UniqueName: \"kubernetes.io/projected/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-kube-api-access-9rsdc\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.097756 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea659b0e-ee06-43f9-a771-7f216a1a7bcb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.637052 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nndl" event={"ID":"ea659b0e-ee06-43f9-a771-7f216a1a7bcb","Type":"ContainerDied","Data":"ced12b813ce2b4abd10906bd1e22c10061b7f4dec47378c539dce55a8f9cf7c4"} Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.637941 4908 scope.go:117] "RemoveContainer" containerID="201bef7e810018b4a0e593810612645fcaedf93d38a4a0da2236b6d56ebef463" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.638184 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nndl" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.670842 4908 scope.go:117] "RemoveContainer" containerID="184087d08219e5627d5a54b8088e7980e543f3225404dc1775edf9fe3835a85d" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.738049 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.752147 4908 scope.go:117] "RemoveContainer" containerID="302cb276efa5564c66e143a47bf13c99e659f364ac9c7e275b1ee5a3d058ead0" Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.755683 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nndl"] Jan 31 08:15:57 crc kubenswrapper[4908]: I0131 08:15:57.949560 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" path="/var/lib/kubelet/pods/ea659b0e-ee06-43f9-a771-7f216a1a7bcb/volumes" Jan 31 08:16:04 crc kubenswrapper[4908]: I0131 08:16:04.940319 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:16:04 crc kubenswrapper[4908]: E0131 08:16:04.941050 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.692195 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 08:16:10 crc kubenswrapper[4908]: E0131 08:16:10.693016 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="registry-server" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693035 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="registry-server" Jan 31 08:16:10 crc kubenswrapper[4908]: E0131 08:16:10.693062 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693071 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:10 crc kubenswrapper[4908]: E0131 08:16:10.693090 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="extract-utilities" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693096 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="extract-utilities" Jan 31 08:16:10 crc kubenswrapper[4908]: E0131 08:16:10.693110 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="extract-content" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693117 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="extract-content" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693285 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea659b0e-ee06-43f9-a771-7f216a1a7bcb" containerName="registry-server" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.693303 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:10 
crc kubenswrapper[4908]: I0131 08:16:10.694222 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.696511 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.702917 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.710575 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.785701 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.789726 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.792282 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.809496 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844586 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844633 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844657 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844686 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844702 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844846 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.844885 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " 
pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845070 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845111 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845155 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845290 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-run\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845321 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9gb\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-kube-api-access-5x9gb\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 
08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845357 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845387 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845426 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.845580 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947593 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: 
I0131 08:16:10.947637 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-dev\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947670 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-run\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947750 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-run\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947763 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947784 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9gb\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-kube-api-access-5x9gb\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.947818 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.948149 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.948937 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949007 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949074 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949113 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949143 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-run\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949183 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949209 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-ceph\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949235 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-sys\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949262 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-lib-cinder\") pod \"cinder-backup-0\" (UID: 
\"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949291 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949311 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949327 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949345 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949366 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949380 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949408 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949428 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-scripts\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949446 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949475 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949492 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949514 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949550 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w68wq\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-kube-api-access-w68wq\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949653 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949836 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.949596 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-iscsi\") pod 
\"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.950171 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.950201 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.950217 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.950295 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.951357 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.951428 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.951506 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.951528 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.951579 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223899d0-e94c-4e2d-bfba-d9b7baec40e1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.953651 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.954343 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data\") 
pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.954638 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.954673 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.954934 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223899d0-e94c-4e2d-bfba-d9b7baec40e1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:10 crc kubenswrapper[4908]: I0131 08:16:10.973766 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9gb\" (UniqueName: \"kubernetes.io/projected/223899d0-e94c-4e2d-bfba-d9b7baec40e1-kube-api-access-5x9gb\") pod \"cinder-volume-volume1-0\" (UID: \"223899d0-e94c-4e2d-bfba-d9b7baec40e1\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.011280 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051709 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051761 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-scripts\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051797 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051836 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051871 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w68wq\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-kube-api-access-w68wq\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051905 4908 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.051956 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052001 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-dev\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052063 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052100 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052125 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " 
pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052152 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-run\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052175 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052199 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-ceph\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052220 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-sys\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052245 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052290 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-dev\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052309 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-run\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052356 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052373 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052380 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052432 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052539 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-sys\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052563 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.052929 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.053851 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.056330 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-scripts\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.056522 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " 
pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.056784 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-ceph\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.058745 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-config-data\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.073492 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.076965 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w68wq\" (UniqueName: \"kubernetes.io/projected/e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130-kube-api-access-w68wq\") pod \"cinder-backup-0\" (UID: \"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130\") " pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.113089 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.351475 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-x8nmk"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.352913 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.365350 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-x8nmk"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.460145 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-25c5-account-create-update-7btgk"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.461692 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.464484 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.466711 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q755q\" (UniqueName: \"kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.466849 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.468688 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-25c5-account-create-update-7btgk"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.562121 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.563613 4908 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.567738 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.568338 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-npr59" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.569391 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.569704 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.569920 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.570086 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdgl\" (UniqueName: \"kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.570159 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q755q\" (UniqueName: \"kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.570338 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.570491 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.571485 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.595933 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q755q\" (UniqueName: \"kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q\") pod \"manila-db-create-x8nmk\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") " pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.661313 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.663359 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.668271 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.668481 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.669268 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.674798 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.674893 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.674930 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-logs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.674961 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675047 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675077 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmdgl\" (UniqueName: \"kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675128 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675181 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675249 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675297 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.675323 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dzl\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-kube-api-access-k5dzl\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.676369 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.685491 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-x8nmk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.701685 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmdgl\" (UniqueName: \"kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl\") pod \"manila-25c5-account-create-update-7btgk\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") " pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777600 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777669 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777696 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-logs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777761 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" 
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777778 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777794 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777834 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777860 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777906 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 
08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777928 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.777949 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778014 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778042 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ntf\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-kube-api-access-s2ntf\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778085 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc 
kubenswrapper[4908]: I0131 08:16:11.778109 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778133 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5dzl\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-kube-api-access-k5dzl\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778163 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778189 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.778907 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc 
kubenswrapper[4908]: I0131 08:16:11.779089 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-logs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.780791 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.785485 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.785960 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-25c5-account-create-update-7btgk" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.786875 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.786962 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.789288 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.790907 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.810625 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5dzl\" (UniqueName: \"kubernetes.io/projected/4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6-kube-api-access-k5dzl\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:16:11 crc 
kubenswrapper[4908]: I0131 08:16:11.826332 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6\") " pod="openstack/glance-default-external-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.844381 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 31 08:16:11 crc kubenswrapper[4908]: W0131 08:16:11.849473 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223899d0_e94c_4e2d_bfba_d9b7baec40e1.slice/crio-4b5d88f4a5794cdff18381196c79198bc44b9ea5e492157e39c9409920777e5a WatchSource:0}: Error finding container 4b5d88f4a5794cdff18381196c79198bc44b9ea5e492157e39c9409920777e5a: Status 404 returned error can't find the container with id 4b5d88f4a5794cdff18381196c79198bc44b9ea5e492157e39c9409920777e5a
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.879540 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.879670 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.904842 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.917064 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.923745 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.924801 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.924888 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.925038 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.925136 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.925211 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.925344 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ntf\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-kube-api-access-s2ntf\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.925449 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.926011 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.929496 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.929866 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7812465e-1935-4283-865b-c02289d7bd1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.936167 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.945310 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.951760 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7812465e-1935-4283-865b-c02289d7bd1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.951942 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ntf\" (UniqueName: \"kubernetes.io/projected/7812465e-1935-4283-865b-c02289d7bd1d-kube-api-access-s2ntf\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.994082 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 31 08:16:11 crc kubenswrapper[4908]: I0131 08:16:11.998612 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7812465e-1935-4283-865b-c02289d7bd1d\") " pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.242706 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-x8nmk"]
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.292035 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.384834 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-25c5-account-create-update-7btgk"]
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.607584 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 08:16:12 crc kubenswrapper[4908]: W0131 08:16:12.633960 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cb838df_d11c_4ba5_8c51_7f5cb5d83ca6.slice/crio-e52a1366620b9c60e006559cb112352277b413036cd0425f5772c7639a22fb9c WatchSource:0}: Error finding container e52a1366620b9c60e006559cb112352277b413036cd0425f5772c7639a22fb9c: Status 404 returned error can't find the container with id e52a1366620b9c60e006559cb112352277b413036cd0425f5772c7639a22fb9c
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.774013 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-x8nmk" event={"ID":"fe31ee36-6d2c-43a5-b127-2659897ae68b","Type":"ContainerStarted","Data":"092b0a600e442db3dc6d7897ea0b903bc9e3128b5574da6a6b482f0b5e45aa46"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.774068 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-x8nmk" event={"ID":"fe31ee36-6d2c-43a5-b127-2659897ae68b","Type":"ContainerStarted","Data":"72b8edc6ed2911f4217ea4c4d5e908c52af3979bb95b5d61ed931582a73db236"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.776288 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6","Type":"ContainerStarted","Data":"e52a1366620b9c60e006559cb112352277b413036cd0425f5772c7639a22fb9c"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.782540 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-25c5-account-create-update-7btgk" event={"ID":"2d096e8e-720c-4fc6-b18d-efea93875a89","Type":"ContainerStarted","Data":"e7c0784dfc672c17bc11928986b58ddd3f34a6a934466b54aa2e859bb11f9776"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.782630 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-25c5-account-create-update-7btgk" event={"ID":"2d096e8e-720c-4fc6-b18d-efea93875a89","Type":"ContainerStarted","Data":"4dd68b1be6a3a517552b20d8acc66b31e99e7aab5756d561783742b032e77362"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.790002 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130","Type":"ContainerStarted","Data":"8fe6fca2e982277fdb1cdec41d40126a898b308ed87d1c4df63fb1579fd179da"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.796354 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"223899d0-e94c-4e2d-bfba-d9b7baec40e1","Type":"ContainerStarted","Data":"4b5d88f4a5794cdff18381196c79198bc44b9ea5e492157e39c9409920777e5a"}
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.807303 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-x8nmk" podStartSLOduration=1.8072645779999998 podStartE2EDuration="1.807264578s" podCreationTimestamp="2026-01-31 08:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:12.806413437 +0000 UTC m=+3279.422358091" watchObservedRunningTime="2026-01-31 08:16:12.807264578 +0000 UTC m=+3279.423209232"
Jan 31 08:16:12 crc kubenswrapper[4908]: I0131 08:16:12.830673 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-25c5-account-create-update-7btgk" podStartSLOduration=1.8306453139999999 podStartE2EDuration="1.830645314s" podCreationTimestamp="2026-01-31 08:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:12.825489405 +0000 UTC m=+3279.441434059" watchObservedRunningTime="2026-01-31 08:16:12.830645314 +0000 UTC m=+3279.446589968"
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.045498 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.819553 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6","Type":"ContainerStarted","Data":"632e9b1580052f881c83b9cf60051bb9a21185fd3232f1833b896faef390c29b"}
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.822447 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7812465e-1935-4283-865b-c02289d7bd1d","Type":"ContainerStarted","Data":"796628bd1422f4fc4a205a7016da1514dc613c969b83994246af9172816561e7"}
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.826917 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130","Type":"ContainerStarted","Data":"127f2bedadc2f56f8bac1f40f9ff81db0bd5fa8892e6423152c5c08e9ac4a3fc"}
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.832764 4908 generic.go:334] "Generic (PLEG): container finished" podID="2d096e8e-720c-4fc6-b18d-efea93875a89" containerID="e7c0784dfc672c17bc11928986b58ddd3f34a6a934466b54aa2e859bb11f9776" exitCode=0
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.834181 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-25c5-account-create-update-7btgk" event={"ID":"2d096e8e-720c-4fc6-b18d-efea93875a89","Type":"ContainerDied","Data":"e7c0784dfc672c17bc11928986b58ddd3f34a6a934466b54aa2e859bb11f9776"}
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.845134 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"223899d0-e94c-4e2d-bfba-d9b7baec40e1","Type":"ContainerStarted","Data":"8b996b0585443c5cbd7ae98458d59d9d3a9c8dcf7c776041f079ed6717b7cca2"}
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.850404 4908 generic.go:334] "Generic (PLEG): container finished" podID="fe31ee36-6d2c-43a5-b127-2659897ae68b" containerID="092b0a600e442db3dc6d7897ea0b903bc9e3128b5574da6a6b482f0b5e45aa46" exitCode=0
Jan 31 08:16:13 crc kubenswrapper[4908]: I0131 08:16:13.850570 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-x8nmk" event={"ID":"fe31ee36-6d2c-43a5-b127-2659897ae68b","Type":"ContainerDied","Data":"092b0a600e442db3dc6d7897ea0b903bc9e3128b5574da6a6b482f0b5e45aa46"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.862259 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7812465e-1935-4283-865b-c02289d7bd1d","Type":"ContainerStarted","Data":"58760600d4f357a0bad929df88fcae42a3eaf6737d4a0f36a5596eb252d9dbf6"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.862912 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7812465e-1935-4283-865b-c02289d7bd1d","Type":"ContainerStarted","Data":"89bef69aace4da0ec4721e6f6fbf680b94ad66818ea1dde229848917a3b3ce4b"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.867706 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130","Type":"ContainerStarted","Data":"14113fae7e004cf0f01f2c05b5d8500fceca7d0651f12d6f6488a7bbeb31d767"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.875262 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"223899d0-e94c-4e2d-bfba-d9b7baec40e1","Type":"ContainerStarted","Data":"2125ea86fc2319389422715ddb3244172e28abedd190235b3ade2de20cb04902"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.880545 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6","Type":"ContainerStarted","Data":"db10fe62fcf5161c83627f76c8b0bdddc7445e671198f70d30de3354a8b56b65"}
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.915303 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.915264976 podStartE2EDuration="4.915264976s" podCreationTimestamp="2026-01-31 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:14.890903716 +0000 UTC m=+3281.506848380" watchObservedRunningTime="2026-01-31 08:16:14.915264976 +0000 UTC m=+3281.531209630"
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.976492 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.97645913 podStartE2EDuration="4.97645913s" podCreationTimestamp="2026-01-31 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:14.941024672 +0000 UTC m=+3281.556969316" watchObservedRunningTime="2026-01-31 08:16:14.97645913 +0000 UTC m=+3281.592403784"
Jan 31 08:16:14 crc kubenswrapper[4908]: I0131 08:16:14.979936 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.505547342 podStartE2EDuration="4.979921807s" podCreationTimestamp="2026-01-31 08:16:10 +0000 UTC" firstStartedPulling="2026-01-31 08:16:11.854367614 +0000 UTC m=+3278.470312268" lastFinishedPulling="2026-01-31 08:16:13.328742079 +0000 UTC m=+3279.944686733" observedRunningTime="2026-01-31 08:16:14.967495346 +0000 UTC m=+3281.583440000" watchObservedRunningTime="2026-01-31 08:16:14.979921807 +0000 UTC m=+3281.595866461"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.006132 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.657226923 podStartE2EDuration="5.006103213s" podCreationTimestamp="2026-01-31 08:16:10 +0000 UTC" firstStartedPulling="2026-01-31 08:16:11.980833784 +0000 UTC m=+3278.596778438" lastFinishedPulling="2026-01-31 08:16:13.329710074 +0000 UTC m=+3279.945654728" observedRunningTime="2026-01-31 08:16:15.00039027 +0000 UTC m=+3281.616334944" watchObservedRunningTime="2026-01-31 08:16:15.006103213 +0000 UTC m=+3281.622047867"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.485695 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-25c5-account-create-update-7btgk"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.491885 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-x8nmk"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.622223 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts\") pod \"2d096e8e-720c-4fc6-b18d-efea93875a89\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") "
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.622503 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q755q\" (UniqueName: \"kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q\") pod \"fe31ee36-6d2c-43a5-b127-2659897ae68b\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") "
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.623894 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d096e8e-720c-4fc6-b18d-efea93875a89" (UID: "2d096e8e-720c-4fc6-b18d-efea93875a89"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.624310 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts\") pod \"fe31ee36-6d2c-43a5-b127-2659897ae68b\" (UID: \"fe31ee36-6d2c-43a5-b127-2659897ae68b\") "
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.624424 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmdgl\" (UniqueName: \"kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl\") pod \"2d096e8e-720c-4fc6-b18d-efea93875a89\" (UID: \"2d096e8e-720c-4fc6-b18d-efea93875a89\") "
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.625159 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d096e8e-720c-4fc6-b18d-efea93875a89-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.625163 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe31ee36-6d2c-43a5-b127-2659897ae68b" (UID: "fe31ee36-6d2c-43a5-b127-2659897ae68b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.632423 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q" (OuterVolumeSpecName: "kube-api-access-q755q") pod "fe31ee36-6d2c-43a5-b127-2659897ae68b" (UID: "fe31ee36-6d2c-43a5-b127-2659897ae68b"). InnerVolumeSpecName "kube-api-access-q755q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.635311 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl" (OuterVolumeSpecName: "kube-api-access-rmdgl") pod "2d096e8e-720c-4fc6-b18d-efea93875a89" (UID: "2d096e8e-720c-4fc6-b18d-efea93875a89"). InnerVolumeSpecName "kube-api-access-rmdgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.727452 4908 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe31ee36-6d2c-43a5-b127-2659897ae68b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.727723 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmdgl\" (UniqueName: \"kubernetes.io/projected/2d096e8e-720c-4fc6-b18d-efea93875a89-kube-api-access-rmdgl\") on node \"crc\" DevicePath \"\""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.727736 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q755q\" (UniqueName: \"kubernetes.io/projected/fe31ee36-6d2c-43a5-b127-2659897ae68b-kube-api-access-q755q\") on node \"crc\" DevicePath \"\""
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.891560 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-x8nmk" event={"ID":"fe31ee36-6d2c-43a5-b127-2659897ae68b","Type":"ContainerDied","Data":"72b8edc6ed2911f4217ea4c4d5e908c52af3979bb95b5d61ed931582a73db236"}
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.891598 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b8edc6ed2911f4217ea4c4d5e908c52af3979bb95b5d61ed931582a73db236"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.891650 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-x8nmk"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.897078 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-25c5-account-create-update-7btgk"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.897782 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-25c5-account-create-update-7btgk" event={"ID":"2d096e8e-720c-4fc6-b18d-efea93875a89","Type":"ContainerDied","Data":"4dd68b1be6a3a517552b20d8acc66b31e99e7aab5756d561783742b032e77362"}
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.897813 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd68b1be6a3a517552b20d8acc66b31e99e7aab5756d561783742b032e77362"
Jan 31 08:16:15 crc kubenswrapper[4908]: I0131 08:16:15.940135 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"
Jan 31 08:16:15 crc kubenswrapper[4908]: E0131 08:16:15.940552 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.012158 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.113624 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.808749 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-64n6d"]
Jan 31 08:16:16 crc kubenswrapper[4908]: E0131 08:16:16.809548 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe31ee36-6d2c-43a5-b127-2659897ae68b" containerName="mariadb-database-create"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.809657 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe31ee36-6d2c-43a5-b127-2659897ae68b" containerName="mariadb-database-create"
Jan 31 08:16:16 crc kubenswrapper[4908]: E0131 08:16:16.809754 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d096e8e-720c-4fc6-b18d-efea93875a89" containerName="mariadb-account-create-update"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.809834 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d096e8e-720c-4fc6-b18d-efea93875a89" containerName="mariadb-account-create-update"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.810163 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe31ee36-6d2c-43a5-b127-2659897ae68b" containerName="mariadb-database-create"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.810289 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d096e8e-720c-4fc6-b18d-efea93875a89" containerName="mariadb-account-create-update"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.811165 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.817511 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.817618 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-r8jpp"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.825778 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-64n6d"]
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.957275 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.957444 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4585p\" (UniqueName: \"kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.957506 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:16 crc kubenswrapper[4908]: I0131 08:16:16.957860 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.061386 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.061755 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.061831 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4585p\" (UniqueName: \"kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.061866 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.074673 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.075947 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.077792 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.099935 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4585p\" (UniqueName: \"kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p\") pod \"manila-db-sync-64n6d\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.128089 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-64n6d"
Jan 31 08:16:17 crc kubenswrapper[4908]: I0131 08:16:17.967819 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-64n6d"]
Jan 31 08:16:17 crc kubenswrapper[4908]: W0131 08:16:17.977030 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f11a3cb_6f5b_47cf_83e2_0836ba0b740e.slice/crio-c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1 WatchSource:0}: Error finding container c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1: Status 404 returned error can't find the container with id c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1
Jan 31 08:16:18 crc kubenswrapper[4908]: I0131 08:16:18.960210 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-64n6d" event={"ID":"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e","Type":"ContainerStarted","Data":"c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1"}
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.258123 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.351329 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.905997 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.906248 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.950888 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:21 crc kubenswrapper[4908]: I0131 08:16:21.952047 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.008104 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.008146 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.292326 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.292377 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.425287 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:22 crc kubenswrapper[4908]: I0131 08:16:22.447619 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:23 crc kubenswrapper[4908]: I0131 08:16:23.020407 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:23 crc kubenswrapper[4908]: I0131 08:16:23.020899 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:24 crc kubenswrapper[4908]: I0131 08:16:24.039371 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-64n6d" event={"ID":"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e","Type":"ContainerStarted","Data":"2e247c096290f816c7b74d9eb63ec613034312fda0ff7ffb3269c9a71bc8d610"}
Jan 31 08:16:29 crc kubenswrapper[4908]: I0131 08:16:29.941424 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109"
Jan 31 08:16:29 crc kubenswrapper[4908]: E0131 08:16:29.942733 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.275466 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.275910 4908 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.284114 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.304477 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-64n6d" podStartSLOduration=11.624160783 podStartE2EDuration="16.304458892s" podCreationTimestamp="2026-01-31 08:16:16 +0000 UTC" firstStartedPulling="2026-01-31 08:16:17.979122183 +0000 UTC m=+3284.595066837" lastFinishedPulling="2026-01-31 08:16:22.659420292 +0000 UTC m=+3289.275364946" observedRunningTime="2026-01-31 08:16:24.092205975 +0000 UTC m=+3290.708150639" watchObservedRunningTime="2026-01-31 08:16:32.304458892 +0000 UTC m=+3298.920403546"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.322513 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.322684 4908 prober_manager.go:312] "Failed
to trigger a manual run" probe="Readiness" Jan 31 08:16:32 crc kubenswrapper[4908]: I0131 08:16:32.340166 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 08:16:35 crc kubenswrapper[4908]: I0131 08:16:35.137730 4908 generic.go:334] "Generic (PLEG): container finished" podID="0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" containerID="2e247c096290f816c7b74d9eb63ec613034312fda0ff7ffb3269c9a71bc8d610" exitCode=0 Jan 31 08:16:35 crc kubenswrapper[4908]: I0131 08:16:35.137803 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-64n6d" event={"ID":"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e","Type":"ContainerDied","Data":"2e247c096290f816c7b74d9eb63ec613034312fda0ff7ffb3269c9a71bc8d610"} Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.544499 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-64n6d" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.682038 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data\") pod \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.682201 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4585p\" (UniqueName: \"kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p\") pod \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.682310 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle\") pod \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\" 
(UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.682496 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data\") pod \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\" (UID: \"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e\") " Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.690687 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" (UID: "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.691129 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p" (OuterVolumeSpecName: "kube-api-access-4585p") pod "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" (UID: "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e"). InnerVolumeSpecName "kube-api-access-4585p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.694850 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data" (OuterVolumeSpecName: "config-data") pod "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" (UID: "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.714317 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" (UID: "0f11a3cb-6f5b-47cf-83e2-0836ba0b740e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.785447 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.785756 4908 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.785861 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4585p\" (UniqueName: \"kubernetes.io/projected/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-kube-api-access-4585p\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:36 crc kubenswrapper[4908]: I0131 08:16:36.785950 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.157238 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-64n6d" event={"ID":"0f11a3cb-6f5b-47cf-83e2-0836ba0b740e","Type":"ContainerDied","Data":"c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1"} Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.157835 4908 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="c8a4b0132a4c40e743dbfbf02668389ca1523c54750e92df230bec1f11f8e8f1" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.157345 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-64n6d" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.532049 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: E0131 08:16:37.532452 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" containerName="manila-db-sync" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.532464 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" containerName="manila-db-sync" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.532692 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" containerName="manila-db-sync" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.533620 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.539550 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-r8jpp" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.539955 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.540138 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.540292 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.567687 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.607009 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.608526 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.612291 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.620407 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.662318 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-cq764"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.664203 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.677741 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-cq764"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708008 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708091 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708150 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708298 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ffq\" (UniqueName: \"kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708386 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.708467 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.776105 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.777935 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.784425 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.806397 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814078 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814134 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814177 4908 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-config\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814216 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814684 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbps9\" (UniqueName: \"kubernetes.io/projected/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-kube-api-access-kbps9\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814741 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814807 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814829 
4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814853 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814892 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814924 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.814968 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815016 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815047 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815075 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815137 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815172 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815207 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815233 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815256 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815272 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815294 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815333 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dw4\" (UniqueName: 
\"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815397 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815487 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4g7h\" (UniqueName: \"kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815524 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ffq\" (UniqueName: \"kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.815684 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.819124 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.822096 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.822189 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.822233 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.838256 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.846743 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ffq\" (UniqueName: \"kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.853098 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.858586 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " 
pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.860398 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.875706 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-r8jpp" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.879504 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920091 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920334 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920340 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920404 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920435 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920486 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920523 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920551 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920580 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-dns-svc\") pod 
\"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920598 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920632 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dw4\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920674 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920738 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4g7h\" (UniqueName: \"kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920766 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920803 
4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920840 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-config\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920868 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbps9\" (UniqueName: \"kubernetes.io/projected/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-kube-api-access-kbps9\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920886 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920911 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920926 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920944 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.920969 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.924428 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.925085 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.925130 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " 
pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.925276 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-config\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.925752 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.929051 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.929622 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.936748 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.946088 4908 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.947103 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.948210 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.950168 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.950652 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.950546 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " 
pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.951056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.951367 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.952718 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.957723 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbps9\" (UniqueName: \"kubernetes.io/projected/6b5bca03-0d34-432b-a2a8-2a30d6cec2cd-kube-api-access-kbps9\") pod \"dnsmasq-dns-69655fd4bf-cq764\" (UID: \"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd\") " pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.957820 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dw4\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4\") pod \"manila-share-share1-0\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " pod="openstack/manila-share-share1-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.970265 4908 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-n4g7h\" (UniqueName: \"kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h\") pod \"manila-api-0\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " pod="openstack/manila-api-0" Jan 31 08:16:37 crc kubenswrapper[4908]: I0131 08:16:37.983681 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:38 crc kubenswrapper[4908]: I0131 08:16:38.107718 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:38 crc kubenswrapper[4908]: I0131 08:16:38.231155 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:16:38 crc kubenswrapper[4908]: I0131 08:16:38.643239 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-cq764"] Jan 31 08:16:38 crc kubenswrapper[4908]: I0131 08:16:38.668698 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:16:39 crc kubenswrapper[4908]: I0131 08:16:39.143641 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:39 crc kubenswrapper[4908]: I0131 08:16:39.213942 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerStarted","Data":"5a9cefeff155b7510a76562bfb57b7675919b9eaac3816cc7ec6932e923987aa"} Jan 31 08:16:39 crc kubenswrapper[4908]: I0131 08:16:39.225089 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" event={"ID":"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd","Type":"ContainerStarted","Data":"56230ab45cfcf672078b6df522abb1cae5448f1dfbd56cd1826ec11e0b0115e0"} Jan 31 08:16:39 crc kubenswrapper[4908]: W0131 08:16:39.311157 4908 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76dd2d97_1350_4cda_aa9d_a255722c67b6.slice/crio-a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e WatchSource:0}: Error finding container a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e: Status 404 returned error can't find the container with id a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e Jan 31 08:16:39 crc kubenswrapper[4908]: I0131 08:16:39.414124 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.270319 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerStarted","Data":"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181"} Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.272027 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerStarted","Data":"acdbbfe4f7053d806e943c43fbafd44d5d3cde9bb7f67b9f769c45b02ee9692e"} Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.274930 4908 generic.go:334] "Generic (PLEG): container finished" podID="6b5bca03-0d34-432b-a2a8-2a30d6cec2cd" containerID="418df6e43d7fb6befdaca376b813034cd2408f767adf55c09c43e4e19cf476c9" exitCode=0 Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.275368 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" event={"ID":"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd","Type":"ContainerDied","Data":"418df6e43d7fb6befdaca376b813034cd2408f767adf55c09c43e4e19cf476c9"} Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.281071 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerStarted","Data":"8cb6b5f5423e3067aad563c9eeb46ff949132697a11c99f408d6d11aebd4e800"} Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.281120 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerStarted","Data":"a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e"} Jan 31 08:16:40 crc kubenswrapper[4908]: I0131 08:16:40.783458 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.301006 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" event={"ID":"6b5bca03-0d34-432b-a2a8-2a30d6cec2cd","Type":"ContainerStarted","Data":"1dfd3b92f15852190e558d773344b0e27feee154a22602cd75665717befed594"} Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.301403 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.304288 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerStarted","Data":"c24be85d63bdcd4dbbdae97f83ca34fe8ddb1a8f45c2259bb5851c9beb6085ac"} Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.304513 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.312771 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerStarted","Data":"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0"} Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.336452 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-69655fd4bf-cq764" podStartSLOduration=4.336434424 podStartE2EDuration="4.336434424s" podCreationTimestamp="2026-01-31 08:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:41.327176557 +0000 UTC m=+3307.943121211" watchObservedRunningTime="2026-01-31 08:16:41.336434424 +0000 UTC m=+3307.952379078" Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.351136 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.351115861 podStartE2EDuration="4.351115861s" podCreationTimestamp="2026-01-31 08:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:41.345536028 +0000 UTC m=+3307.961480682" watchObservedRunningTime="2026-01-31 08:16:41.351115861 +0000 UTC m=+3307.967060516" Jan 31 08:16:41 crc kubenswrapper[4908]: I0131 08:16:41.398662 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.611320788 podStartE2EDuration="4.398637832s" podCreationTimestamp="2026-01-31 08:16:37 +0000 UTC" firstStartedPulling="2026-01-31 08:16:38.654648848 +0000 UTC m=+3305.270593502" lastFinishedPulling="2026-01-31 08:16:39.441965892 +0000 UTC m=+3306.057910546" observedRunningTime="2026-01-31 08:16:41.385814203 +0000 UTC m=+3308.001758867" watchObservedRunningTime="2026-01-31 08:16:41.398637832 +0000 UTC m=+3308.014582486" Jan 31 08:16:42 crc kubenswrapper[4908]: I0131 08:16:42.326762 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api-log" containerID="cri-o://8cb6b5f5423e3067aad563c9eeb46ff949132697a11c99f408d6d11aebd4e800" gracePeriod=30 Jan 31 08:16:42 crc kubenswrapper[4908]: I0131 
08:16:42.326880 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api" containerID="cri-o://c24be85d63bdcd4dbbdae97f83ca34fe8ddb1a8f45c2259bb5851c9beb6085ac" gracePeriod=30 Jan 31 08:16:42 crc kubenswrapper[4908]: I0131 08:16:42.940327 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:16:42 crc kubenswrapper[4908]: E0131 08:16:42.941006 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:43.365087 4908 generic.go:334] "Generic (PLEG): container finished" podID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerID="c24be85d63bdcd4dbbdae97f83ca34fe8ddb1a8f45c2259bb5851c9beb6085ac" exitCode=0 Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:43.365390 4908 generic.go:334] "Generic (PLEG): container finished" podID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerID="8cb6b5f5423e3067aad563c9eeb46ff949132697a11c99f408d6d11aebd4e800" exitCode=143 Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:43.365429 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerDied","Data":"c24be85d63bdcd4dbbdae97f83ca34fe8ddb1a8f45c2259bb5851c9beb6085ac"} Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:43.365458 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerDied","Data":"8cb6b5f5423e3067aad563c9eeb46ff949132697a11c99f408d6d11aebd4e800"} Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:47.881184 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:47.986126 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-cq764" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.056125 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.434399 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="dnsmasq-dns" containerID="cri-o://e3544921f1f0faf9b4d8e5928970a63e9386801cfe8c66cd77a49374592af458" gracePeriod=10 Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.434953 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"76dd2d97-1350-4cda-aa9d-a255722c67b6","Type":"ContainerDied","Data":"a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e"} Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.435225 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b70a675d577179ea53437fa6160902d8399bf5a9632d6418a7970e607b086e" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.481705 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.608849 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.608913 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.608945 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.609068 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4g7h\" (UniqueName: \"kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.609129 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.609284 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.609362 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom\") pod \"76dd2d97-1350-4cda-aa9d-a255722c67b6\" (UID: \"76dd2d97-1350-4cda-aa9d-a255722c67b6\") " Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.610369 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs" (OuterVolumeSpecName: "logs") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.610638 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.620749 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts" (OuterVolumeSpecName: "scripts") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.620793 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h" (OuterVolumeSpecName: "kube-api-access-n4g7h") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "kube-api-access-n4g7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.621229 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.643335 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.680664 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data" (OuterVolumeSpecName: "config-data") pod "76dd2d97-1350-4cda-aa9d-a255722c67b6" (UID: "76dd2d97-1350-4cda-aa9d-a255722c67b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712035 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76dd2d97-1350-4cda-aa9d-a255722c67b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712083 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712095 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712108 4908 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76dd2d97-1350-4cda-aa9d-a255722c67b6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712118 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712130 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd2d97-1350-4cda-aa9d-a255722c67b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:48 crc kubenswrapper[4908]: I0131 08:16:48.712142 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4g7h\" (UniqueName: \"kubernetes.io/projected/76dd2d97-1350-4cda-aa9d-a255722c67b6-kube-api-access-n4g7h\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.491997 4908 generic.go:334] "Generic 
(PLEG): container finished" podID="96324b48-9ef5-4df7-aa47-d586f228789e" containerID="e3544921f1f0faf9b4d8e5928970a63e9386801cfe8c66cd77a49374592af458" exitCode=0 Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.492157 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" event={"ID":"96324b48-9ef5-4df7-aa47-d586f228789e","Type":"ContainerDied","Data":"e3544921f1f0faf9b4d8e5928970a63e9386801cfe8c66cd77a49374592af458"} Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.495265 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.545415 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.565624 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.583092 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:49 crc kubenswrapper[4908]: E0131 08:16:49.583497 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api-log" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.583514 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api-log" Jan 31 08:16:49 crc kubenswrapper[4908]: E0131 08:16:49.583577 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.583586 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.583774 4908 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api-log" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.583799 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" containerName="manila-api" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.584795 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.586723 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.586901 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.587908 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.594484 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.762508 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b36d23df-e364-41df-bfd4-751e0104325d-etc-machine-id\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.762595 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.762851 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5p8dg\" (UniqueName: \"kubernetes.io/projected/b36d23df-e364-41df-bfd4-751e0104325d-kube-api-access-5p8dg\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.763568 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-public-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.763641 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data-custom\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.763672 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-scripts\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.763869 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-internal-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.764030 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.764268 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36d23df-e364-41df-bfd4-751e0104325d-logs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.810484 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.865766 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36d23df-e364-41df-bfd4-751e0104325d-logs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866147 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b36d23df-e364-41df-bfd4-751e0104325d-etc-machine-id\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866172 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866204 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8dg\" (UniqueName: 
\"kubernetes.io/projected/b36d23df-e364-41df-bfd4-751e0104325d-kube-api-access-5p8dg\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866211 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b36d23df-e364-41df-bfd4-751e0104325d-etc-machine-id\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866229 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-public-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866266 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data-custom\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866270 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36d23df-e364-41df-bfd4-751e0104325d-logs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866288 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-scripts\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866393 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-internal-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.866437 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.872817 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.874823 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.875356 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-internal-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.876044 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-config-data-custom\") pod \"manila-api-0\" (UID: 
\"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.879249 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-scripts\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.883371 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b36d23df-e364-41df-bfd4-751e0104325d-public-tls-certs\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.891953 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8dg\" (UniqueName: \"kubernetes.io/projected/b36d23df-e364-41df-bfd4-751e0104325d-kube-api-access-5p8dg\") pod \"manila-api-0\" (UID: \"b36d23df-e364-41df-bfd4-751e0104325d\") " pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.911639 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972047 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972120 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972161 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972253 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972317 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbn2h\" (UniqueName: \"kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.972343 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb\") pod \"96324b48-9ef5-4df7-aa47-d586f228789e\" (UID: \"96324b48-9ef5-4df7-aa47-d586f228789e\") " Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.982179 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76dd2d97-1350-4cda-aa9d-a255722c67b6" path="/var/lib/kubelet/pods/76dd2d97-1350-4cda-aa9d-a255722c67b6/volumes" Jan 31 08:16:49 crc kubenswrapper[4908]: I0131 08:16:49.991584 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h" (OuterVolumeSpecName: "kube-api-access-nbn2h") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "kube-api-access-nbn2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.028304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.033936 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.037571 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config" (OuterVolumeSpecName: "config") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.037921 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.053632 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96324b48-9ef5-4df7-aa47-d586f228789e" (UID: "96324b48-9ef5-4df7-aa47-d586f228789e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076274 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbn2h\" (UniqueName: \"kubernetes.io/projected/96324b48-9ef5-4df7-aa47-d586f228789e-kube-api-access-nbn2h\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076797 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076811 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076826 4908 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076837 4908 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.076848 4908 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96324b48-9ef5-4df7-aa47-d586f228789e-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.503857 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" event={"ID":"96324b48-9ef5-4df7-aa47-d586f228789e","Type":"ContainerDied","Data":"24ce2957b07b2acf905e6bbc313705ea768fe1592030dc4b638830ad1ca8db1c"} Jan 31 08:16:50 crc 
kubenswrapper[4908]: I0131 08:16:50.503900 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-8qn5b" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.503914 4908 scope.go:117] "RemoveContainer" containerID="e3544921f1f0faf9b4d8e5928970a63e9386801cfe8c66cd77a49374592af458" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.507538 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerStarted","Data":"c8e325fb4fdeadf4e279f56e5983b0bb3eaa2b787e70a4f0c40e625ff4fcb036"} Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.533680 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.548357 4908 scope.go:117] "RemoveContainer" containerID="66f2f7229c139f71c0f8c76e9348aced376bfd53255a8536bffdf4bb0d680b9a" Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.758755 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 08:16:50 crc kubenswrapper[4908]: I0131 08:16:50.781127 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-8qn5b"] Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.526654 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b36d23df-e364-41df-bfd4-751e0104325d","Type":"ContainerStarted","Data":"542a91f3287555acf9064a17fb4ae2fa393b081e844e1854a674f77326fc65d7"} Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.526904 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b36d23df-e364-41df-bfd4-751e0104325d","Type":"ContainerStarted","Data":"6f97b3c09208ac62f0be13f74b9efb8bd37a1f5e6f95b19243f3cf9a8d9c9da9"} Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.531376 4908 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerStarted","Data":"14e41483a8903ba3389983b01154d3d62db13e25c6418d95ef7afc94a9d542f4"} Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.552323 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.674718775 podStartE2EDuration="14.552300929s" podCreationTimestamp="2026-01-31 08:16:37 +0000 UTC" firstStartedPulling="2026-01-31 08:16:39.452168144 +0000 UTC m=+3306.068112798" lastFinishedPulling="2026-01-31 08:16:49.329750298 +0000 UTC m=+3315.945694952" observedRunningTime="2026-01-31 08:16:51.550487673 +0000 UTC m=+3318.166432327" watchObservedRunningTime="2026-01-31 08:16:51.552300929 +0000 UTC m=+3318.168245583" Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.666690 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.667347 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="sg-core" containerID="cri-o://82080723f2252835eadeba740395d67bb1eaa2c0863221f56f3e58e0f7c838ad" gracePeriod=30 Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.667441 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-notification-agent" containerID="cri-o://16cbf5496603fc4d9ec91a8cb68dcd8805f40b41b93bd370dde8c679d970251a" gracePeriod=30 Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.667566 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="proxy-httpd" 
containerID="cri-o://6dd6f37e595f22acc4ab98d88422db9041d09aeecbfeb11f138dd024d4b5eb84" gracePeriod=30 Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.667069 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-central-agent" containerID="cri-o://9db0fac8c22fd03cb0a7b92b87d300daa1fc4a35ffd092a51e5290123b4184fb" gracePeriod=30 Jan 31 08:16:51 crc kubenswrapper[4908]: I0131 08:16:51.951088 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" path="/var/lib/kubelet/pods/96324b48-9ef5-4df7-aa47-d586f228789e/volumes" Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545414 4908 generic.go:334] "Generic (PLEG): container finished" podID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerID="6dd6f37e595f22acc4ab98d88422db9041d09aeecbfeb11f138dd024d4b5eb84" exitCode=0 Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545681 4908 generic.go:334] "Generic (PLEG): container finished" podID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerID="82080723f2252835eadeba740395d67bb1eaa2c0863221f56f3e58e0f7c838ad" exitCode=2 Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545690 4908 generic.go:334] "Generic (PLEG): container finished" podID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerID="9db0fac8c22fd03cb0a7b92b87d300daa1fc4a35ffd092a51e5290123b4184fb" exitCode=0 Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545449 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerDied","Data":"6dd6f37e595f22acc4ab98d88422db9041d09aeecbfeb11f138dd024d4b5eb84"} Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545751 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerDied","Data":"82080723f2252835eadeba740395d67bb1eaa2c0863221f56f3e58e0f7c838ad"} Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.545764 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerDied","Data":"9db0fac8c22fd03cb0a7b92b87d300daa1fc4a35ffd092a51e5290123b4184fb"} Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.549193 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b36d23df-e364-41df-bfd4-751e0104325d","Type":"ContainerStarted","Data":"468e99b3348c94dd796b2d9d769e65f54f6fa1b20d13b697f20b5b16ef3a928b"} Jan 31 08:16:52 crc kubenswrapper[4908]: I0131 08:16:52.568996 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.568954883 podStartE2EDuration="3.568954883s" podCreationTimestamp="2026-01-31 08:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:16:52.567968138 +0000 UTC m=+3319.183912792" watchObservedRunningTime="2026-01-31 08:16:52.568954883 +0000 UTC m=+3319.184899547" Jan 31 08:16:53 crc kubenswrapper[4908]: I0131 08:16:53.559830 4908 generic.go:334] "Generic (PLEG): container finished" podID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerID="16cbf5496603fc4d9ec91a8cb68dcd8805f40b41b93bd370dde8c679d970251a" exitCode=0 Jan 31 08:16:53 crc kubenswrapper[4908]: I0131 08:16:53.559896 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerDied","Data":"16cbf5496603fc4d9ec91a8cb68dcd8805f40b41b93bd370dde8c679d970251a"} Jan 31 08:16:53 crc kubenswrapper[4908]: I0131 08:16:53.560469 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/manila-api-0" Jan 31 08:16:53 crc kubenswrapper[4908]: I0131 08:16:53.919268 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:16:53 crc kubenswrapper[4908]: I0131 08:16:53.940863 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:16:53 crc kubenswrapper[4908]: E0131 08:16:53.941145 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.073827 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.073881 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.073916 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.073966 4908 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cfmr\" (UniqueName: \"kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.074030 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.074089 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.074235 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.074304 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs\") pod \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\" (UID: \"8da7412c-bb80-4222-b7a5-4a88e50a86f2\") " Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.074496 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.075146 4908 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.077468 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.080624 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr" (OuterVolumeSpecName: "kube-api-access-9cfmr") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "kube-api-access-9cfmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.080813 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts" (OuterVolumeSpecName: "scripts") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.107304 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.136167 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.162757 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: "8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177806 4908 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8da7412c-bb80-4222-b7a5-4a88e50a86f2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177845 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cfmr\" (UniqueName: \"kubernetes.io/projected/8da7412c-bb80-4222-b7a5-4a88e50a86f2-kube-api-access-9cfmr\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177857 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177870 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177880 4908 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.177890 4908 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.189569 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data" (OuterVolumeSpecName: "config-data") pod "8da7412c-bb80-4222-b7a5-4a88e50a86f2" (UID: 
"8da7412c-bb80-4222-b7a5-4a88e50a86f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.280030 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da7412c-bb80-4222-b7a5-4a88e50a86f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.572601 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.572619 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8da7412c-bb80-4222-b7a5-4a88e50a86f2","Type":"ContainerDied","Data":"a6bd73170bbed5714b5a36dbed657dab246832f5ab5c43632f0b94a99cc52fd6"} Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.572694 4908 scope.go:117] "RemoveContainer" containerID="6dd6f37e595f22acc4ab98d88422db9041d09aeecbfeb11f138dd024d4b5eb84" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.619088 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.620801 4908 scope.go:117] "RemoveContainer" containerID="82080723f2252835eadeba740395d67bb1eaa2c0863221f56f3e58e0f7c838ad" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.631844 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.655310 4908 scope.go:117] "RemoveContainer" containerID="16cbf5496603fc4d9ec91a8cb68dcd8805f40b41b93bd370dde8c679d970251a" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657280 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657711 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="init" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657736 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="init" Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657753 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-central-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657761 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-central-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657783 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="dnsmasq-dns" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657792 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="dnsmasq-dns" Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657810 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="proxy-httpd" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657819 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="proxy-httpd" Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657834 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-notification-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657841 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-notification-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: E0131 08:16:54.657858 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="sg-core" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.657865 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="sg-core" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.658087 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="96324b48-9ef5-4df7-aa47-d586f228789e" containerName="dnsmasq-dns" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.658114 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="sg-core" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.658129 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="proxy-httpd" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.658146 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-central-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.658160 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" containerName="ceilometer-notification-agent" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.660274 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.663367 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.663391 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.670444 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.673796 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.695376 4908 scope.go:117] "RemoveContainer" containerID="9db0fac8c22fd03cb0a7b92b87d300daa1fc4a35ffd092a51e5290123b4184fb" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794344 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794489 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-log-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794511 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " 
pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794535 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794581 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-scripts\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794602 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-config-data\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794629 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdpm\" (UniqueName: \"kubernetes.io/projected/7a010224-6b0f-4037-a4ca-f32fce0ab77b-kube-api-access-ntdpm\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.794683 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-run-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896520 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-log-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896579 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896616 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896649 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-scripts\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896680 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-config-data\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896718 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntdpm\" (UniqueName: \"kubernetes.io/projected/7a010224-6b0f-4037-a4ca-f32fce0ab77b-kube-api-access-ntdpm\") pod \"ceilometer-0\" (UID: 
\"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896760 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-run-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.896864 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.897432 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-run-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.897543 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a010224-6b0f-4037-a4ca-f32fce0ab77b-log-httpd\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.900479 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.902390 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.902919 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-scripts\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.906648 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.916622 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a010224-6b0f-4037-a4ca-f32fce0ab77b-config-data\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.933735 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntdpm\" (UniqueName: \"kubernetes.io/projected/7a010224-6b0f-4037-a4ca-f32fce0ab77b-kube-api-access-ntdpm\") pod \"ceilometer-0\" (UID: \"7a010224-6b0f-4037-a4ca-f32fce0ab77b\") " pod="openstack/ceilometer-0" Jan 31 08:16:54 crc kubenswrapper[4908]: I0131 08:16:54.992456 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:16:55 crc kubenswrapper[4908]: I0131 08:16:55.518798 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:16:55 crc kubenswrapper[4908]: W0131 08:16:55.522628 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a010224_6b0f_4037_a4ca_f32fce0ab77b.slice/crio-7d9ad67619a994fe4700592ebd6d221dc3bb3c7a1d604bd0f0cb13a97e8e2a9f WatchSource:0}: Error finding container 7d9ad67619a994fe4700592ebd6d221dc3bb3c7a1d604bd0f0cb13a97e8e2a9f: Status 404 returned error can't find the container with id 7d9ad67619a994fe4700592ebd6d221dc3bb3c7a1d604bd0f0cb13a97e8e2a9f Jan 31 08:16:55 crc kubenswrapper[4908]: I0131 08:16:55.583321 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a010224-6b0f-4037-a4ca-f32fce0ab77b","Type":"ContainerStarted","Data":"7d9ad67619a994fe4700592ebd6d221dc3bb3c7a1d604bd0f0cb13a97e8e2a9f"} Jan 31 08:16:55 crc kubenswrapper[4908]: I0131 08:16:55.951554 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da7412c-bb80-4222-b7a5-4a88e50a86f2" path="/var/lib/kubelet/pods/8da7412c-bb80-4222-b7a5-4a88e50a86f2/volumes" Jan 31 08:16:58 crc kubenswrapper[4908]: I0131 08:16:58.232185 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 08:16:58 crc kubenswrapper[4908]: I0131 08:16:58.614552 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a010224-6b0f-4037-a4ca-f32fce0ab77b","Type":"ContainerStarted","Data":"90a506a935268f49e6de90587a2ec95b83b3b813013c0338d01a6d71c5a647fd"} Jan 31 08:16:59 crc kubenswrapper[4908]: I0131 08:16:59.519677 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 08:16:59 crc kubenswrapper[4908]: I0131 
08:16:59.587649 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:16:59 crc kubenswrapper[4908]: I0131 08:16:59.627148 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a010224-6b0f-4037-a4ca-f32fce0ab77b","Type":"ContainerStarted","Data":"2cb97bd3999468890d93d61575e3e5af15068c959517c35336f93dab5d41cf26"} Jan 31 08:16:59 crc kubenswrapper[4908]: I0131 08:16:59.627361 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="manila-scheduler" containerID="cri-o://c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181" gracePeriod=30 Jan 31 08:16:59 crc kubenswrapper[4908]: I0131 08:16:59.627429 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="probe" containerID="cri-o://677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0" gracePeriod=30 Jan 31 08:17:00 crc kubenswrapper[4908]: I0131 08:17:00.661298 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a010224-6b0f-4037-a4ca-f32fce0ab77b","Type":"ContainerStarted","Data":"f478b9f6847aae226e3c8c552d02ae8a853f31c4a7b8b413becdb8907c4bd347"} Jan 31 08:17:00 crc kubenswrapper[4908]: I0131 08:17:00.671921 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerID="677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0" exitCode=0 Jan 31 08:17:00 crc kubenswrapper[4908]: I0131 08:17:00.672007 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerDied","Data":"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0"} Jan 31 08:17:01 crc 
kubenswrapper[4908]: E0131 08:17:01.334590 4908 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb54fd85_3449_4e89_9d17_717d7adcbf8a.slice/crio-c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb54fd85_3449_4e89_9d17_717d7adcbf8a.slice/crio-conmon-c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181.scope\": RecentStats: unable to find data in memory cache]" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.604455 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.688567 4908 generic.go:334] "Generic (PLEG): container finished" podID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerID="c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181" exitCode=0 Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.688612 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerDied","Data":"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181"} Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.688642 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eb54fd85-3449-4e89-9d17-717d7adcbf8a","Type":"ContainerDied","Data":"5a9cefeff155b7510a76562bfb57b7675919b9eaac3816cc7ec6932e923987aa"} Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.688666 4908 scope.go:117] "RemoveContainer" containerID="677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.688794 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.714145 4908 scope.go:117] "RemoveContainer" containerID="c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.740873 4908 scope.go:117] "RemoveContainer" containerID="677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0" Jan 31 08:17:01 crc kubenswrapper[4908]: E0131 08:17:01.741418 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0\": container with ID starting with 677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0 not found: ID does not exist" containerID="677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.741458 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0"} err="failed to get container status \"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0\": rpc error: code = NotFound desc = could not find container \"677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0\": container with ID starting with 677ee8d4b88a895aca4313491f2c9c27720ca6bfaff4d186c8be36bb1d70dad0 not found: ID does not exist" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.741484 4908 scope.go:117] "RemoveContainer" containerID="c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181" Jan 31 08:17:01 crc kubenswrapper[4908]: E0131 08:17:01.741993 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181\": container with ID starting with 
c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181 not found: ID does not exist" containerID="c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.742042 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181"} err="failed to get container status \"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181\": rpc error: code = NotFound desc = could not find container \"c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181\": container with ID starting with c4811b0f7d7b0b503fa4f92764defa45afb1fc7138246abafbc21c6951d4f181 not found: ID does not exist" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788481 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788617 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788650 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ffq\" (UniqueName: \"kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788784 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788814 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.788847 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data\") pod \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\" (UID: \"eb54fd85-3449-4e89-9d17-717d7adcbf8a\") " Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.789078 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.789764 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb54fd85-3449-4e89-9d17-717d7adcbf8a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.794505 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq" (OuterVolumeSpecName: "kube-api-access-d8ffq") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). 
InnerVolumeSpecName "kube-api-access-d8ffq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.795311 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts" (OuterVolumeSpecName: "scripts") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.797002 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.843309 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.892970 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.893021 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ffq\" (UniqueName: \"kubernetes.io/projected/eb54fd85-3449-4e89-9d17-717d7adcbf8a-kube-api-access-d8ffq\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.893036 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.893052 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.910379 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data" (OuterVolumeSpecName: "config-data") pod "eb54fd85-3449-4e89-9d17-717d7adcbf8a" (UID: "eb54fd85-3449-4e89-9d17-717d7adcbf8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:01 crc kubenswrapper[4908]: I0131 08:17:01.996079 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb54fd85-3449-4e89-9d17-717d7adcbf8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.043580 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.059137 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.074697 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:17:02 crc kubenswrapper[4908]: E0131 08:17:02.075205 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="probe" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.075224 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="probe" Jan 31 08:17:02 crc kubenswrapper[4908]: E0131 08:17:02.075247 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="manila-scheduler" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.075255 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="manila-scheduler" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.075924 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="manila-scheduler" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.075950 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" containerName="probe" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 
08:17:02.077281 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.080104 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.094795 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.199748 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-scripts\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.199823 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxx6h\" (UniqueName: \"kubernetes.io/projected/e7d1af3f-b25e-47f1-9b97-ee268d46505f-kube-api-access-vxx6h\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.199848 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7d1af3f-b25e-47f1-9b97-ee268d46505f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.201859 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " 
pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.201913 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.202035 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.304580 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.304687 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-scripts\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.304759 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxx6h\" (UniqueName: \"kubernetes.io/projected/e7d1af3f-b25e-47f1-9b97-ee268d46505f-kube-api-access-vxx6h\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.304783 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7d1af3f-b25e-47f1-9b97-ee268d46505f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.305007 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.305045 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.305208 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7d1af3f-b25e-47f1-9b97-ee268d46505f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.308429 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.311865 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-config-data-custom\") pod 
\"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.320669 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.321019 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1af3f-b25e-47f1-9b97-ee268d46505f-scripts\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.324925 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxx6h\" (UniqueName: \"kubernetes.io/projected/e7d1af3f-b25e-47f1-9b97-ee268d46505f-kube-api-access-vxx6h\") pod \"manila-scheduler-0\" (UID: \"e7d1af3f-b25e-47f1-9b97-ee268d46505f\") " pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.403949 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.705257 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a010224-6b0f-4037-a4ca-f32fce0ab77b","Type":"ContainerStarted","Data":"1dfe3666780cedbad15258561979528ed18816ad5057697bec13a3f02a159dc5"} Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.706129 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.737961 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.992389619 podStartE2EDuration="8.737936852s" podCreationTimestamp="2026-01-31 08:16:54 +0000 UTC" firstStartedPulling="2026-01-31 08:16:55.525096997 +0000 UTC m=+3322.141041651" lastFinishedPulling="2026-01-31 08:17:02.27064423 +0000 UTC m=+3328.886588884" observedRunningTime="2026-01-31 08:17:02.729764993 +0000 UTC m=+3329.345709657" watchObservedRunningTime="2026-01-31 08:17:02.737936852 +0000 UTC m=+3329.353881506" Jan 31 08:17:02 crc kubenswrapper[4908]: I0131 08:17:02.850254 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:17:02 crc kubenswrapper[4908]: W0131 08:17:02.855964 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7d1af3f_b25e_47f1_9b97_ee268d46505f.slice/crio-a1c2d3c48cebd662963bcb77f786f7834b66d7a714ce4ab719ac55fdb25f6463 WatchSource:0}: Error finding container a1c2d3c48cebd662963bcb77f786f7834b66d7a714ce4ab719ac55fdb25f6463: Status 404 returned error can't find the container with id a1c2d3c48cebd662963bcb77f786f7834b66d7a714ce4ab719ac55fdb25f6463 Jan 31 08:17:03 crc kubenswrapper[4908]: I0131 08:17:03.716688 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"e7d1af3f-b25e-47f1-9b97-ee268d46505f","Type":"ContainerStarted","Data":"22904746e4dea04ba4166c35d16083892acf25b721f9f7f2c3f1ca4ff1a6dc0a"} Jan 31 08:17:03 crc kubenswrapper[4908]: I0131 08:17:03.717063 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e7d1af3f-b25e-47f1-9b97-ee268d46505f","Type":"ContainerStarted","Data":"b0c107f382eb101bb12ebc4e2a1f025d220a99ac33b0db14f8488c41c9d59d31"} Jan 31 08:17:03 crc kubenswrapper[4908]: I0131 08:17:03.718781 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e7d1af3f-b25e-47f1-9b97-ee268d46505f","Type":"ContainerStarted","Data":"a1c2d3c48cebd662963bcb77f786f7834b66d7a714ce4ab719ac55fdb25f6463"} Jan 31 08:17:03 crc kubenswrapper[4908]: I0131 08:17:03.951739 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb54fd85-3449-4e89-9d17-717d7adcbf8a" path="/var/lib/kubelet/pods/eb54fd85-3449-4e89-9d17-717d7adcbf8a/volumes" Jan 31 08:17:08 crc kubenswrapper[4908]: I0131 08:17:08.940276 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:17:08 crc kubenswrapper[4908]: E0131 08:17:08.941040 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:17:10 crc kubenswrapper[4908]: I0131 08:17:10.018758 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 08:17:10 crc kubenswrapper[4908]: I0131 08:17:10.040262 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/manila-scheduler-0" podStartSLOduration=8.040243466 podStartE2EDuration="8.040243466s" podCreationTimestamp="2026-01-31 08:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:17:03.754489595 +0000 UTC m=+3330.370434269" watchObservedRunningTime="2026-01-31 08:17:10.040243466 +0000 UTC m=+3336.656188120" Jan 31 08:17:10 crc kubenswrapper[4908]: I0131 08:17:10.077915 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:10 crc kubenswrapper[4908]: I0131 08:17:10.780609 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="manila-share" containerID="cri-o://c8e325fb4fdeadf4e279f56e5983b0bb3eaa2b787e70a4f0c40e625ff4fcb036" gracePeriod=30 Jan 31 08:17:10 crc kubenswrapper[4908]: I0131 08:17:10.780699 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="probe" containerID="cri-o://14e41483a8903ba3389983b01154d3d62db13e25c6418d95ef7afc94a9d542f4" gracePeriod=30 Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.460432 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.810189 4908 generic.go:334] "Generic (PLEG): container finished" podID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerID="14e41483a8903ba3389983b01154d3d62db13e25c6418d95ef7afc94a9d542f4" exitCode=0 Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.810285 4908 generic.go:334] "Generic (PLEG): container finished" podID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerID="c8e325fb4fdeadf4e279f56e5983b0bb3eaa2b787e70a4f0c40e625ff4fcb036" exitCode=1 Jan 31 08:17:11 crc kubenswrapper[4908]: 
I0131 08:17:11.810308 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerDied","Data":"14e41483a8903ba3389983b01154d3d62db13e25c6418d95ef7afc94a9d542f4"} Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.810342 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerDied","Data":"c8e325fb4fdeadf4e279f56e5983b0bb3eaa2b787e70a4f0c40e625ff4fcb036"} Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.810367 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"995cd9ca-8380-4cfa-957c-a7e76208f2d7","Type":"ContainerDied","Data":"acdbbfe4f7053d806e943c43fbafd44d5d3cde9bb7f67b9f769c45b02ee9692e"} Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.810376 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdbbfe4f7053d806e943c43fbafd44d5d3cde9bb7f67b9f769c45b02ee9692e" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.828668 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.912675 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.913034 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.913227 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.913288 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.913879 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.914101 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.914370 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.914466 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dw4\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.914627 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.914735 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts\") pod \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\" (UID: \"995cd9ca-8380-4cfa-957c-a7e76208f2d7\") " Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.915515 4908 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.915608 4908 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/995cd9ca-8380-4cfa-957c-a7e76208f2d7-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.922398 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4" (OuterVolumeSpecName: "kube-api-access-75dw4") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: 
"995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "kube-api-access-75dw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.927763 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph" (OuterVolumeSpecName: "ceph") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.927828 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts" (OuterVolumeSpecName: "scripts") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.927856 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:11 crc kubenswrapper[4908]: I0131 08:17:11.974185 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.019257 4908 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.019301 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.019312 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75dw4\" (UniqueName: \"kubernetes.io/projected/995cd9ca-8380-4cfa-957c-a7e76208f2d7-kube-api-access-75dw4\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.019322 4908 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.019340 4908 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.036685 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data" (OuterVolumeSpecName: "config-data") pod "995cd9ca-8380-4cfa-957c-a7e76208f2d7" (UID: "995cd9ca-8380-4cfa-957c-a7e76208f2d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.120706 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995cd9ca-8380-4cfa-957c-a7e76208f2d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.405019 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.817873 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.854245 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.862148 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.875303 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:12 crc kubenswrapper[4908]: E0131 08:17:12.875665 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="manila-share" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.875680 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="manila-share" Jan 31 08:17:12 crc kubenswrapper[4908]: E0131 08:17:12.875699 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="probe" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.875707 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="probe" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.875912 4908 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="manila-share" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.875948 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" containerName="probe" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.876969 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.880620 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.888547 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.936783 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.936867 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29md\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-kube-api-access-g29md\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.936910 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-scripts\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 
crc kubenswrapper[4908]: I0131 08:17:12.936942 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.937008 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-ceph\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.937063 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.937096 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:12 crc kubenswrapper[4908]: I0131 08:17:12.937149 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 
08:17:13.038741 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.038813 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g29md\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-kube-api-access-g29md\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.038853 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-scripts\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.038884 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.038952 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-ceph\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.039054 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.039597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.039631 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.039693 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.039780 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/57e85c46-6c67-4c97-b7e4-49b3090ceda1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.044447 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-ceph\") pod \"manila-share-share1-0\" (UID: 
\"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.044689 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-scripts\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.044894 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.047430 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.047812 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57e85c46-6c67-4c97-b7e4-49b3090ceda1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.056825 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g29md\" (UniqueName: \"kubernetes.io/projected/57e85c46-6c67-4c97-b7e4-49b3090ceda1-kube-api-access-g29md\") pod \"manila-share-share1-0\" (UID: \"57e85c46-6c67-4c97-b7e4-49b3090ceda1\") " pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.200492 4908 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:17:13 crc kubenswrapper[4908]: I0131 08:17:13.952875 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995cd9ca-8380-4cfa-957c-a7e76208f2d7" path="/var/lib/kubelet/pods/995cd9ca-8380-4cfa-957c-a7e76208f2d7/volumes" Jan 31 08:17:14 crc kubenswrapper[4908]: I0131 08:17:14.230485 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:17:14 crc kubenswrapper[4908]: W0131 08:17:14.236406 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57e85c46_6c67_4c97_b7e4_49b3090ceda1.slice/crio-6f4a9002550f6aa6b1a1da366876e4446ef25256abff488e5b357c76b16c4de4 WatchSource:0}: Error finding container 6f4a9002550f6aa6b1a1da366876e4446ef25256abff488e5b357c76b16c4de4: Status 404 returned error can't find the container with id 6f4a9002550f6aa6b1a1da366876e4446ef25256abff488e5b357c76b16c4de4 Jan 31 08:17:14 crc kubenswrapper[4908]: I0131 08:17:14.845599 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"57e85c46-6c67-4c97-b7e4-49b3090ceda1","Type":"ContainerStarted","Data":"ae50daafe01b888d69ea2794196518b75f2321be3e0e6592d3a8a96cbfdda69c"} Jan 31 08:17:14 crc kubenswrapper[4908]: I0131 08:17:14.845866 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"57e85c46-6c67-4c97-b7e4-49b3090ceda1","Type":"ContainerStarted","Data":"6f4a9002550f6aa6b1a1da366876e4446ef25256abff488e5b357c76b16c4de4"} Jan 31 08:17:15 crc kubenswrapper[4908]: I0131 08:17:15.858003 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"57e85c46-6c67-4c97-b7e4-49b3090ceda1","Type":"ContainerStarted","Data":"90130b2ab5d3e7d985e93eb0f3b6b78533eb5e8be0f3a155522f7c20fcd11414"} Jan 31 08:17:15 crc 
kubenswrapper[4908]: I0131 08:17:15.877322 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.877307902 podStartE2EDuration="3.877307902s" podCreationTimestamp="2026-01-31 08:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:17:15.87606959 +0000 UTC m=+3342.492014254" watchObservedRunningTime="2026-01-31 08:17:15.877307902 +0000 UTC m=+3342.493252556" Jan 31 08:17:21 crc kubenswrapper[4908]: I0131 08:17:21.943246 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:17:21 crc kubenswrapper[4908]: E0131 08:17:21.943908 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:17:23 crc kubenswrapper[4908]: I0131 08:17:23.200861 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 08:17:24 crc kubenswrapper[4908]: I0131 08:17:24.083851 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 08:17:25 crc kubenswrapper[4908]: I0131 08:17:25.007065 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 08:17:34 crc kubenswrapper[4908]: I0131 08:17:34.870359 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 08:17:36 crc kubenswrapper[4908]: I0131 08:17:36.939856 4908 scope.go:117] 
"RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:17:36 crc kubenswrapper[4908]: E0131 08:17:36.940567 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:17:44 crc kubenswrapper[4908]: E0131 08:17:44.367615 4908 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.46:44772->38.102.83.46:44229: read tcp 38.102.83.46:44772->38.102.83.46:44229: read: connection reset by peer Jan 31 08:17:48 crc kubenswrapper[4908]: I0131 08:17:48.940114 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:17:50 crc kubenswrapper[4908]: I0131 08:17:50.034136 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53"} Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.429586 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.431429 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.438967 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-lgv6f" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.440527 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.441616 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.441905 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.445564 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570088 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570148 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570224 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzzd\" 
(UniqueName: \"kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570365 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570410 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570431 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570460 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570506 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570603 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.570659 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672106 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672175 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672204 4908 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672227 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672273 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672308 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672360 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672395 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672431 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672460 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqzzd\" (UniqueName: \"kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.672564 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.673232 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.673673 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.673853 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.674510 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.677897 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.679635 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.686720 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.691002 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.693148 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqzzd\" (UniqueName: \"kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.706153 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:25 crc kubenswrapper[4908]: I0131 08:18:25.762073 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:18:26 crc kubenswrapper[4908]: I0131 08:18:26.293123 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:18:26 crc kubenswrapper[4908]: I0131 08:18:26.374971 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"40735625-500c-4305-a02b-1ba667645b50","Type":"ContainerStarted","Data":"9fc7aec99c752acdd4f66a6b0184e89c4c7f2e76a44e3040c64a2883e256b83a"} Jan 31 08:19:09 crc kubenswrapper[4908]: E0131 08:19:09.989605 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 08:19:09 crc kubenswrapper[4908]: E0131 08:19:09.990253 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,Mo
untPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqzzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(40735625-500c-4305-a02b-1ba667645b50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:19:09 crc kubenswrapper[4908]: E0131 08:19:09.991760 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="40735625-500c-4305-a02b-1ba667645b50" Jan 31 08:19:10 crc kubenswrapper[4908]: E0131 08:19:10.788432 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="40735625-500c-4305-a02b-1ba667645b50" Jan 31 08:19:26 crc kubenswrapper[4908]: I0131 08:19:26.678725 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 08:19:27 crc kubenswrapper[4908]: I0131 08:19:27.988499 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"40735625-500c-4305-a02b-1ba667645b50","Type":"ContainerStarted","Data":"48d4d6d2b7f13476503aa5090fd95afb05905ce16129df02c046e568ed532f9a"} Jan 31 08:19:28 crc kubenswrapper[4908]: I0131 08:19:28.010140 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.635803664 podStartE2EDuration="1m4.010117531s" podCreationTimestamp="2026-01-31 08:18:24 +0000 UTC" firstStartedPulling="2026-01-31 08:18:26.301175301 +0000 UTC m=+3412.917119955" lastFinishedPulling="2026-01-31 
08:19:26.675489168 +0000 UTC m=+3473.291433822" observedRunningTime="2026-01-31 08:19:28.002871374 +0000 UTC m=+3474.618816028" watchObservedRunningTime="2026-01-31 08:19:28.010117531 +0000 UTC m=+3474.626062195" Jan 31 08:20:10 crc kubenswrapper[4908]: I0131 08:20:10.431545 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:20:10 crc kubenswrapper[4908]: I0131 08:20:10.432218 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:20:40 crc kubenswrapper[4908]: I0131 08:20:40.431427 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:20:40 crc kubenswrapper[4908]: I0131 08:20:40.433315 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.431735 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.432356 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.432412 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.433207 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.433264 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53" gracePeriod=600 Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.896925 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53" exitCode=0 Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.897001 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53"} Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.897343 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154"} Jan 31 08:21:10 crc kubenswrapper[4908]: I0131 08:21:10.897365 4908 scope.go:117] "RemoveContainer" containerID="45b6442b02811fdd7ac25e5e4be784a5ca77f8b698061843f293ff138e736109" Jan 31 08:23:10 crc kubenswrapper[4908]: I0131 08:23:10.431537 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:23:10 crc kubenswrapper[4908]: I0131 08:23:10.432114 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:23:31 crc kubenswrapper[4908]: I0131 08:23:31.356332 4908 scope.go:117] "RemoveContainer" containerID="8cb6b5f5423e3067aad563c9eeb46ff949132697a11c99f408d6d11aebd4e800" Jan 31 08:23:31 crc kubenswrapper[4908]: I0131 08:23:31.386787 4908 scope.go:117] "RemoveContainer" containerID="c8e325fb4fdeadf4e279f56e5983b0bb3eaa2b787e70a4f0c40e625ff4fcb036" Jan 31 08:23:31 crc kubenswrapper[4908]: I0131 08:23:31.407541 4908 scope.go:117] "RemoveContainer" containerID="c24be85d63bdcd4dbbdae97f83ca34fe8ddb1a8f45c2259bb5851c9beb6085ac" Jan 31 08:23:31 crc kubenswrapper[4908]: I0131 
08:23:31.427365 4908 scope.go:117] "RemoveContainer" containerID="14e41483a8903ba3389983b01154d3d62db13e25c6418d95ef7afc94a9d542f4" Jan 31 08:23:40 crc kubenswrapper[4908]: I0131 08:23:40.431545 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:23:40 crc kubenswrapper[4908]: I0131 08:23:40.432924 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:24:10 crc kubenswrapper[4908]: I0131 08:24:10.431406 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:24:10 crc kubenswrapper[4908]: I0131 08:24:10.431993 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:24:10 crc kubenswrapper[4908]: I0131 08:24:10.432050 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:24:10 crc kubenswrapper[4908]: I0131 08:24:10.432895 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:24:10 crc kubenswrapper[4908]: I0131 08:24:10.432954 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" gracePeriod=600 Jan 31 08:24:10 crc kubenswrapper[4908]: E0131 08:24:10.588922 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:24:11 crc kubenswrapper[4908]: I0131 08:24:11.534198 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" exitCode=0 Jan 31 08:24:11 crc kubenswrapper[4908]: I0131 08:24:11.534257 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154"} Jan 31 08:24:11 crc kubenswrapper[4908]: I0131 08:24:11.534538 4908 scope.go:117] "RemoveContainer" containerID="8cadb6f892082fd46c47d5bdc04b74496272271bbbb352ab53ff378471c91b53" Jan 31 08:24:11 crc kubenswrapper[4908]: I0131 08:24:11.535344 4908 
scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:24:11 crc kubenswrapper[4908]: E0131 08:24:11.535677 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:24:25 crc kubenswrapper[4908]: I0131 08:24:25.941088 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:24:25 crc kubenswrapper[4908]: E0131 08:24:25.942314 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:24:36 crc kubenswrapper[4908]: I0131 08:24:36.940463 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:24:36 crc kubenswrapper[4908]: E0131 08:24:36.941789 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:24:48 crc kubenswrapper[4908]: I0131 
08:24:48.940371 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:24:48 crc kubenswrapper[4908]: E0131 08:24:48.941073 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:02 crc kubenswrapper[4908]: I0131 08:25:02.941698 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:25:02 crc kubenswrapper[4908]: E0131 08:25:02.944635 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:13 crc kubenswrapper[4908]: I0131 08:25:13.940505 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:25:13 crc kubenswrapper[4908]: E0131 08:25:13.941472 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:25 crc 
kubenswrapper[4908]: I0131 08:25:25.940755 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:25:25 crc kubenswrapper[4908]: E0131 08:25:25.941597 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:40 crc kubenswrapper[4908]: I0131 08:25:40.940602 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:25:40 crc kubenswrapper[4908]: E0131 08:25:40.941642 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.003948 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.007449 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.029678 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.069725 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.070116 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khzh\" (UniqueName: \"kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.070319 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.172452 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.172549 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9khzh\" (UniqueName: \"kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.172603 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.173114 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.173146 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.191556 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khzh\" (UniqueName: \"kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh\") pod \"certified-operators-lhggv\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.348301 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:48 crc kubenswrapper[4908]: I0131 08:25:48.888869 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] Jan 31 08:25:49 crc kubenswrapper[4908]: I0131 08:25:49.400425 4908 generic.go:334] "Generic (PLEG): container finished" podID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerID="5b86bbcd51318a5a4bd55c253d3694e96c9151dc3fe9b5c11d5c5e52bd531c03" exitCode=0 Jan 31 08:25:49 crc kubenswrapper[4908]: I0131 08:25:49.400609 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerDied","Data":"5b86bbcd51318a5a4bd55c253d3694e96c9151dc3fe9b5c11d5c5e52bd531c03"} Jan 31 08:25:49 crc kubenswrapper[4908]: I0131 08:25:49.401509 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerStarted","Data":"886b451ac3566a7c2e1fa40af0f30fa73e173d79508925defe509239e7114269"} Jan 31 08:25:49 crc kubenswrapper[4908]: I0131 08:25:49.403024 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:25:51 crc kubenswrapper[4908]: I0131 08:25:51.420109 4908 generic.go:334] "Generic (PLEG): container finished" podID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerID="b150101badc1b01a5b06e027d3c835f4cf35522f6daf470d76465be78a20f40b" exitCode=0 Jan 31 08:25:51 crc kubenswrapper[4908]: I0131 08:25:51.420243 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerDied","Data":"b150101badc1b01a5b06e027d3c835f4cf35522f6daf470d76465be78a20f40b"} Jan 31 08:25:51 crc kubenswrapper[4908]: I0131 08:25:51.950230 4908 scope.go:117] "RemoveContainer" 
containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:25:51 crc kubenswrapper[4908]: E0131 08:25:51.951087 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:25:52 crc kubenswrapper[4908]: I0131 08:25:52.430996 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerStarted","Data":"01371380d742311d955bc7a5cf972a36f579b8d6c5dd34baa5f704dfe2730fba"} Jan 31 08:25:52 crc kubenswrapper[4908]: I0131 08:25:52.457839 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lhggv" podStartSLOduration=2.979438983 podStartE2EDuration="5.457815475s" podCreationTimestamp="2026-01-31 08:25:47 +0000 UTC" firstStartedPulling="2026-01-31 08:25:49.402726227 +0000 UTC m=+3856.018670881" lastFinishedPulling="2026-01-31 08:25:51.881102719 +0000 UTC m=+3858.497047373" observedRunningTime="2026-01-31 08:25:52.451212663 +0000 UTC m=+3859.067157337" watchObservedRunningTime="2026-01-31 08:25:52.457815475 +0000 UTC m=+3859.073760129" Jan 31 08:25:58 crc kubenswrapper[4908]: I0131 08:25:58.349125 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:58 crc kubenswrapper[4908]: I0131 08:25:58.349841 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:58 crc kubenswrapper[4908]: I0131 08:25:58.397737 4908 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:58 crc kubenswrapper[4908]: I0131 08:25:58.577259 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:25:58 crc kubenswrapper[4908]: I0131 08:25:58.633584 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] Jan 31 08:26:00 crc kubenswrapper[4908]: I0131 08:26:00.564148 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lhggv" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="registry-server" containerID="cri-o://01371380d742311d955bc7a5cf972a36f579b8d6c5dd34baa5f704dfe2730fba" gracePeriod=2 Jan 31 08:26:01 crc kubenswrapper[4908]: I0131 08:26:01.576271 4908 generic.go:334] "Generic (PLEG): container finished" podID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerID="01371380d742311d955bc7a5cf972a36f579b8d6c5dd34baa5f704dfe2730fba" exitCode=0 Jan 31 08:26:01 crc kubenswrapper[4908]: I0131 08:26:01.576356 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerDied","Data":"01371380d742311d955bc7a5cf972a36f579b8d6c5dd34baa5f704dfe2730fba"} Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.319701 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.429721 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9khzh\" (UniqueName: \"kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh\") pod \"59a4fb9f-fba5-475b-b278-66ffa47541d6\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.429781 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content\") pod \"59a4fb9f-fba5-475b-b278-66ffa47541d6\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.429820 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities\") pod \"59a4fb9f-fba5-475b-b278-66ffa47541d6\" (UID: \"59a4fb9f-fba5-475b-b278-66ffa47541d6\") " Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.430991 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities" (OuterVolumeSpecName: "utilities") pod "59a4fb9f-fba5-475b-b278-66ffa47541d6" (UID: "59a4fb9f-fba5-475b-b278-66ffa47541d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.435036 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh" (OuterVolumeSpecName: "kube-api-access-9khzh") pod "59a4fb9f-fba5-475b-b278-66ffa47541d6" (UID: "59a4fb9f-fba5-475b-b278-66ffa47541d6"). InnerVolumeSpecName "kube-api-access-9khzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.531218 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9khzh\" (UniqueName: \"kubernetes.io/projected/59a4fb9f-fba5-475b-b278-66ffa47541d6-kube-api-access-9khzh\") on node \"crc\" DevicePath \"\"" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.531259 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.588518 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhggv" event={"ID":"59a4fb9f-fba5-475b-b278-66ffa47541d6","Type":"ContainerDied","Data":"886b451ac3566a7c2e1fa40af0f30fa73e173d79508925defe509239e7114269"} Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.588581 4908 scope.go:117] "RemoveContainer" containerID="01371380d742311d955bc7a5cf972a36f579b8d6c5dd34baa5f704dfe2730fba" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.588735 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhggv" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.614468 4908 scope.go:117] "RemoveContainer" containerID="b150101badc1b01a5b06e027d3c835f4cf35522f6daf470d76465be78a20f40b" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.648617 4908 scope.go:117] "RemoveContainer" containerID="5b86bbcd51318a5a4bd55c253d3694e96c9151dc3fe9b5c11d5c5e52bd531c03" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.919012 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59a4fb9f-fba5-475b-b278-66ffa47541d6" (UID: "59a4fb9f-fba5-475b-b278-66ffa47541d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.942748 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:26:02 crc kubenswrapper[4908]: E0131 08:26:02.943025 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:26:02 crc kubenswrapper[4908]: I0131 08:26:02.952128 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a4fb9f-fba5-475b-b278-66ffa47541d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:26:03 crc kubenswrapper[4908]: I0131 08:26:03.250634 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] 
Jan 31 08:26:03 crc kubenswrapper[4908]: I0131 08:26:03.260994 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lhggv"] Jan 31 08:26:03 crc kubenswrapper[4908]: I0131 08:26:03.951899 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" path="/var/lib/kubelet/pods/59a4fb9f-fba5-475b-b278-66ffa47541d6/volumes" Jan 31 08:26:13 crc kubenswrapper[4908]: I0131 08:26:13.940408 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:26:13 crc kubenswrapper[4908]: E0131 08:26:13.941285 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.048604 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-25c5-account-create-update-7btgk"] Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.060707 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-x8nmk"] Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.069475 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-25c5-account-create-update-7btgk"] Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.076701 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-x8nmk"] Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.951359 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d096e8e-720c-4fc6-b18d-efea93875a89" 
path="/var/lib/kubelet/pods/2d096e8e-720c-4fc6-b18d-efea93875a89/volumes" Jan 31 08:26:15 crc kubenswrapper[4908]: I0131 08:26:15.951908 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe31ee36-6d2c-43a5-b127-2659897ae68b" path="/var/lib/kubelet/pods/fe31ee36-6d2c-43a5-b127-2659897ae68b/volumes" Jan 31 08:26:24 crc kubenswrapper[4908]: I0131 08:26:24.940582 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:26:24 crc kubenswrapper[4908]: E0131 08:26:24.941685 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:26:31 crc kubenswrapper[4908]: I0131 08:26:31.523913 4908 scope.go:117] "RemoveContainer" containerID="092b0a600e442db3dc6d7897ea0b903bc9e3128b5574da6a6b482f0b5e45aa46" Jan 31 08:26:31 crc kubenswrapper[4908]: I0131 08:26:31.548595 4908 scope.go:117] "RemoveContainer" containerID="e7c0784dfc672c17bc11928986b58ddd3f34a6a934466b54aa2e859bb11f9776" Jan 31 08:26:35 crc kubenswrapper[4908]: I0131 08:26:35.940314 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:26:35 crc kubenswrapper[4908]: E0131 08:26:35.941015 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:26:37 crc kubenswrapper[4908]: I0131 08:26:37.052861 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-64n6d"] Jan 31 08:26:37 crc kubenswrapper[4908]: I0131 08:26:37.064725 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-64n6d"] Jan 31 08:26:37 crc kubenswrapper[4908]: I0131 08:26:37.951678 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f11a3cb-6f5b-47cf-83e2-0836ba0b740e" path="/var/lib/kubelet/pods/0f11a3cb-6f5b-47cf-83e2-0836ba0b740e/volumes" Jan 31 08:26:49 crc kubenswrapper[4908]: I0131 08:26:49.943338 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:26:49 crc kubenswrapper[4908]: E0131 08:26:49.944025 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:27:02 crc kubenswrapper[4908]: I0131 08:27:02.940876 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:27:02 crc kubenswrapper[4908]: E0131 08:27:02.941669 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:27:07 crc kubenswrapper[4908]: I0131 
08:27:07.141592 4908 generic.go:334] "Generic (PLEG): container finished" podID="40735625-500c-4305-a02b-1ba667645b50" containerID="48d4d6d2b7f13476503aa5090fd95afb05905ce16129df02c046e568ed532f9a" exitCode=1 Jan 31 08:27:07 crc kubenswrapper[4908]: I0131 08:27:07.141784 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"40735625-500c-4305-a02b-1ba667645b50","Type":"ContainerDied","Data":"48d4d6d2b7f13476503aa5090fd95afb05905ce16129df02c046e568ed532f9a"} Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.886818 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.904761 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.981323 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:27:08 crc kubenswrapper[4908]: E0131 08:27:08.981844 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="extract-content" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.981863 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="extract-content" Jan 31 08:27:08 crc kubenswrapper[4908]: E0131 08:27:08.981934 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="extract-utilities" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.981946 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="extract-utilities" Jan 31 08:27:08 crc kubenswrapper[4908]: E0131 08:27:08.981969 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40735625-500c-4305-a02b-1ba667645b50" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.982076 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="40735625-500c-4305-a02b-1ba667645b50" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:27:08 crc kubenswrapper[4908]: E0131 08:27:08.982099 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="registry-server" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.982107 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="registry-server" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.982550 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="40735625-500c-4305-a02b-1ba667645b50" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.982629 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a4fb9f-fba5-475b-b278-66ffa47541d6" containerName="registry-server" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.983891 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.985687 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.987862 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 31 08:27:08 crc kubenswrapper[4908]: I0131 08:27:08.988965 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.058743 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.058804 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.058840 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.058924 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: 
\"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.058962 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.059191 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.059215 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqzzd\" (UniqueName: \"kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.059264 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.059313 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.059333 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config\") pod \"40735625-500c-4305-a02b-1ba667645b50\" (UID: \"40735625-500c-4305-a02b-1ba667645b50\") " Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.060283 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.061264 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data" (OuterVolumeSpecName: "config-data") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.065859 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.071566 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.073230 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph" (OuterVolumeSpecName: "ceph") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.080554 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd" (OuterVolumeSpecName: "kube-api-access-vqzzd") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "kube-api-access-vqzzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.092756 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.097117 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.106848 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.124883 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "40735625-500c-4305-a02b-1ba667645b50" (UID: "40735625-500c-4305-a02b-1ba667645b50"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.161904 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162025 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162113 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162164 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162190 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162215 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162269 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8sw\" (UniqueName: \"kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc 
kubenswrapper[4908]: I0131 08:27:09.162295 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162327 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162425 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162485 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162497 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162506 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/40735625-500c-4305-a02b-1ba667645b50-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162517 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqzzd\" (UniqueName: \"kubernetes.io/projected/40735625-500c-4305-a02b-1ba667645b50-kube-api-access-vqzzd\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162529 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162538 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40735625-500c-4305-a02b-1ba667645b50-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162547 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162555 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.162566 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40735625-500c-4305-a02b-1ba667645b50-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.163184 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" 
event={"ID":"40735625-500c-4305-a02b-1ba667645b50","Type":"ContainerDied","Data":"9fc7aec99c752acdd4f66a6b0184e89c4c7f2e76a44e3040c64a2883e256b83a"} Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.163218 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc7aec99c752acdd4f66a6b0184e89c4c7f2e76a44e3040c64a2883e256b83a" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.163274 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.189631 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.265922 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266086 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266284 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key\") 
pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266390 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266486 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266569 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266617 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266649 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.266782 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8sw\" (UniqueName: \"kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.267540 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.268090 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.268665 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.272324 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.273130 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.274508 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.277859 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.278072 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.291993 4908 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2s8sw\" (UniqueName: \"kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.303403 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:27:09 crc kubenswrapper[4908]: I0131 08:27:09.850257 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:27:10 crc kubenswrapper[4908]: I0131 08:27:10.177891 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"4104d917-4c43-4e30-8d26-600b50a30e83","Type":"ContainerStarted","Data":"ad6b20b30882465a9b7456e06e218ba7943712aa62f21e4dd749fb1751185302"} Jan 31 08:27:11 crc kubenswrapper[4908]: I0131 08:27:11.189132 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"4104d917-4c43-4e30-8d26-600b50a30e83","Type":"ContainerStarted","Data":"dc6142cad961502989eefae1bbca5104927b4ab6110fcd22af2d0d75e4d7aad0"} Jan 31 08:27:11 crc kubenswrapper[4908]: I0131 08:27:11.209861 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.209837229 podStartE2EDuration="3.209837229s" podCreationTimestamp="2026-01-31 08:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:27:11.203815657 +0000 UTC m=+3937.819760331" watchObservedRunningTime="2026-01-31 08:27:11.209837229 +0000 UTC m=+3937.825781903" Jan 31 08:27:14 crc kubenswrapper[4908]: I0131 08:27:14.941487 4908 scope.go:117] "RemoveContainer" 
containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:27:14 crc kubenswrapper[4908]: E0131 08:27:14.942668 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:27:27 crc kubenswrapper[4908]: I0131 08:27:27.946858 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:27:27 crc kubenswrapper[4908]: E0131 08:27:27.947593 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:27:31 crc kubenswrapper[4908]: I0131 08:27:31.678511 4908 scope.go:117] "RemoveContainer" containerID="2e247c096290f816c7b74d9eb63ec613034312fda0ff7ffb3269c9a71bc8d610" Jan 31 08:27:40 crc kubenswrapper[4908]: I0131 08:27:40.940697 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:27:40 crc kubenswrapper[4908]: E0131 08:27:40.941519 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:27:53 crc kubenswrapper[4908]: I0131 08:27:53.940678 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:27:53 crc kubenswrapper[4908]: E0131 08:27:53.942831 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:28:07 crc kubenswrapper[4908]: I0131 08:28:07.948113 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:28:07 crc kubenswrapper[4908]: E0131 08:28:07.949694 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:28:18 crc kubenswrapper[4908]: I0131 08:28:18.940869 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:28:18 crc kubenswrapper[4908]: E0131 08:28:18.941815 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:28:32 crc kubenswrapper[4908]: I0131 08:28:32.941691 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:28:32 crc kubenswrapper[4908]: E0131 08:28:32.942591 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:28:46 crc kubenswrapper[4908]: I0131 08:28:46.941042 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:28:46 crc kubenswrapper[4908]: E0131 08:28:46.941937 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:28:57 crc kubenswrapper[4908]: I0131 08:28:57.948368 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:28:57 crc kubenswrapper[4908]: E0131 08:28:57.949148 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:29:10 crc kubenswrapper[4908]: I0131 08:29:10.939764 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:29:11 crc kubenswrapper[4908]: I0131 08:29:11.239302 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0"} Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.185896 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww"] Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.188229 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.190825 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.191012 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.195579 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww"] Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.227104 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx66k\" (UniqueName: \"kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.227246 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.227296 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.328507 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx66k\" (UniqueName: \"kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.328935 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.329047 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.330490 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.338754 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.350718 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx66k\" (UniqueName: \"kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k\") pod \"collect-profiles-29497470-b7qww\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:00 crc kubenswrapper[4908]: I0131 08:30:00.521883 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:01 crc kubenswrapper[4908]: I0131 08:30:01.049780 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww"] Jan 31 08:30:01 crc kubenswrapper[4908]: I0131 08:30:01.660377 4908 generic.go:334] "Generic (PLEG): container finished" podID="4a981af9-3983-4154-9b0a-766bdfe28f95" containerID="0dc5199e2d1a8006071da4603d4684167be40dfeadd94da94669c51fb699eecd" exitCode=0 Jan 31 08:30:01 crc kubenswrapper[4908]: I0131 08:30:01.660476 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" event={"ID":"4a981af9-3983-4154-9b0a-766bdfe28f95","Type":"ContainerDied","Data":"0dc5199e2d1a8006071da4603d4684167be40dfeadd94da94669c51fb699eecd"} Jan 31 08:30:01 crc kubenswrapper[4908]: I0131 08:30:01.660683 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" 
event={"ID":"4a981af9-3983-4154-9b0a-766bdfe28f95","Type":"ContainerStarted","Data":"94e8453b1be07b82789d3f12bfa165534dbe3681a4d57caee9f56433b69d3197"} Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.067851 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.208455 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx66k\" (UniqueName: \"kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k\") pod \"4a981af9-3983-4154-9b0a-766bdfe28f95\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.209484 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume\") pod \"4a981af9-3983-4154-9b0a-766bdfe28f95\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.209567 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume\") pod \"4a981af9-3983-4154-9b0a-766bdfe28f95\" (UID: \"4a981af9-3983-4154-9b0a-766bdfe28f95\") " Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.210277 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a981af9-3983-4154-9b0a-766bdfe28f95" (UID: "4a981af9-3983-4154-9b0a-766bdfe28f95"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.214222 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a981af9-3983-4154-9b0a-766bdfe28f95" (UID: "4a981af9-3983-4154-9b0a-766bdfe28f95"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.214354 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k" (OuterVolumeSpecName: "kube-api-access-nx66k") pod "4a981af9-3983-4154-9b0a-766bdfe28f95" (UID: "4a981af9-3983-4154-9b0a-766bdfe28f95"). InnerVolumeSpecName "kube-api-access-nx66k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.312513 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx66k\" (UniqueName: \"kubernetes.io/projected/4a981af9-3983-4154-9b0a-766bdfe28f95-kube-api-access-nx66k\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.312560 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a981af9-3983-4154-9b0a-766bdfe28f95-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.312573 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a981af9-3983-4154-9b0a-766bdfe28f95-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.696065 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" 
event={"ID":"4a981af9-3983-4154-9b0a-766bdfe28f95","Type":"ContainerDied","Data":"94e8453b1be07b82789d3f12bfa165534dbe3681a4d57caee9f56433b69d3197"} Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.696540 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e8453b1be07b82789d3f12bfa165534dbe3681a4d57caee9f56433b69d3197" Jan 31 08:30:03 crc kubenswrapper[4908]: I0131 08:30:03.696338 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-b7qww" Jan 31 08:30:04 crc kubenswrapper[4908]: I0131 08:30:04.144401 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm"] Jan 31 08:30:04 crc kubenswrapper[4908]: I0131 08:30:04.152178 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-d2kxm"] Jan 31 08:30:05 crc kubenswrapper[4908]: I0131 08:30:05.950444 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cb8abd-5768-468b-b2b1-2e2667692cf9" path="/var/lib/kubelet/pods/53cb8abd-5768-468b-b2b1-2e2667692cf9/volumes" Jan 31 08:30:31 crc kubenswrapper[4908]: I0131 08:30:31.827079 4908 scope.go:117] "RemoveContainer" containerID="0a35cfad048bb698539770c06b992e39a357d5db3b6918364c50c30c696a6270" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.564160 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:30:56 crc kubenswrapper[4908]: E0131 08:30:56.565251 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a981af9-3983-4154-9b0a-766bdfe28f95" containerName="collect-profiles" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.565273 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a981af9-3983-4154-9b0a-766bdfe28f95" containerName="collect-profiles" Jan 31 08:30:56 crc 
kubenswrapper[4908]: I0131 08:30:56.565510 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a981af9-3983-4154-9b0a-766bdfe28f95" containerName="collect-profiles" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.566821 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.574965 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.715864 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvbg\" (UniqueName: \"kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.715941 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.716040 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.818479 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvbg\" (UniqueName: 
\"kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.818573 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.818615 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.819101 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.819308 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.854086 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvbg\" (UniqueName: 
\"kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg\") pod \"community-operators-f7l64\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:56 crc kubenswrapper[4908]: I0131 08:30:56.889461 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:30:57 crc kubenswrapper[4908]: I0131 08:30:57.440002 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:30:58 crc kubenswrapper[4908]: I0131 08:30:58.233820 4908 generic.go:334] "Generic (PLEG): container finished" podID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerID="293f814122388cf6eb2de358b24f4939569b9df5c26adfa5f413f9f56f551477" exitCode=0 Jan 31 08:30:58 crc kubenswrapper[4908]: I0131 08:30:58.233906 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerDied","Data":"293f814122388cf6eb2de358b24f4939569b9df5c26adfa5f413f9f56f551477"} Jan 31 08:30:58 crc kubenswrapper[4908]: I0131 08:30:58.234356 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerStarted","Data":"f2c384173845a2be2fc680462f9f353e23f4ec792e62d40172a077e01a2fa2eb"} Jan 31 08:30:58 crc kubenswrapper[4908]: I0131 08:30:58.235952 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:30:59 crc kubenswrapper[4908]: I0131 08:30:59.245054 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerStarted","Data":"83ced788f3958417d7658dba3d3c54ebcd6ff67dc92d4cc2dc9d1c8a044a040c"} Jan 31 08:31:00 
crc kubenswrapper[4908]: I0131 08:31:00.368353 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.375717 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.377171 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.397068 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.397273 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvt9\" (UniqueName: \"kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.397338 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.499333 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.499408 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvt9\" (UniqueName: \"kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.499477 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.499883 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.500098 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.520528 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvt9\" (UniqueName: 
\"kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9\") pod \"redhat-marketplace-w2vgf\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:00 crc kubenswrapper[4908]: I0131 08:31:00.696850 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:01 crc kubenswrapper[4908]: W0131 08:31:01.234868 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb487bb_20cc_4b49_bb8e_5ffade4ada31.slice/crio-9b15a2628dd946a7f533332afeb69580b0de84e431b54ba095d067ad64c09908 WatchSource:0}: Error finding container 9b15a2628dd946a7f533332afeb69580b0de84e431b54ba095d067ad64c09908: Status 404 returned error can't find the container with id 9b15a2628dd946a7f533332afeb69580b0de84e431b54ba095d067ad64c09908 Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.245358 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.263223 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerStarted","Data":"9b15a2628dd946a7f533332afeb69580b0de84e431b54ba095d067ad64c09908"} Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.359920 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.361828 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.380156 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.517704 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.517940 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.518086 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz8s6\" (UniqueName: \"kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.620074 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.620168 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sz8s6\" (UniqueName: \"kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.620302 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.621213 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.621268 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.645121 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz8s6\" (UniqueName: \"kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6\") pod \"redhat-operators-6j4hj\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:01 crc kubenswrapper[4908]: I0131 08:31:01.698819 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:02 crc kubenswrapper[4908]: I0131 08:31:02.471121 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:02 crc kubenswrapper[4908]: W0131 08:31:02.471677 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec76775_757b_45c3_a872_f79fe5758421.slice/crio-cf911a21d78d78aec64c172d0a54815dfd6c74fffe6e2cef93896b6fe14a71fd WatchSource:0}: Error finding container cf911a21d78d78aec64c172d0a54815dfd6c74fffe6e2cef93896b6fe14a71fd: Status 404 returned error can't find the container with id cf911a21d78d78aec64c172d0a54815dfd6c74fffe6e2cef93896b6fe14a71fd Jan 31 08:31:03 crc kubenswrapper[4908]: I0131 08:31:03.283004 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ec76775-757b-45c3-a872-f79fe5758421" containerID="7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab" exitCode=0 Jan 31 08:31:03 crc kubenswrapper[4908]: I0131 08:31:03.283082 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerDied","Data":"7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab"} Jan 31 08:31:03 crc kubenswrapper[4908]: I0131 08:31:03.283425 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerStarted","Data":"cf911a21d78d78aec64c172d0a54815dfd6c74fffe6e2cef93896b6fe14a71fd"} Jan 31 08:31:03 crc kubenswrapper[4908]: I0131 08:31:03.287144 4908 generic.go:334] "Generic (PLEG): container finished" podID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerID="7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0" exitCode=0 Jan 31 08:31:03 crc kubenswrapper[4908]: I0131 08:31:03.287263 
4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerDied","Data":"7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0"} Jan 31 08:31:06 crc kubenswrapper[4908]: I0131 08:31:06.318851 4908 generic.go:334] "Generic (PLEG): container finished" podID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerID="83ced788f3958417d7658dba3d3c54ebcd6ff67dc92d4cc2dc9d1c8a044a040c" exitCode=0 Jan 31 08:31:06 crc kubenswrapper[4908]: I0131 08:31:06.318911 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerDied","Data":"83ced788f3958417d7658dba3d3c54ebcd6ff67dc92d4cc2dc9d1c8a044a040c"} Jan 31 08:31:08 crc kubenswrapper[4908]: I0131 08:31:08.346527 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerStarted","Data":"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5"} Jan 31 08:31:08 crc kubenswrapper[4908]: I0131 08:31:08.350651 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerStarted","Data":"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216"} Jan 31 08:31:09 crc kubenswrapper[4908]: I0131 08:31:09.363702 4908 generic.go:334] "Generic (PLEG): container finished" podID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerID="fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5" exitCode=0 Jan 31 08:31:09 crc kubenswrapper[4908]: I0131 08:31:09.363855 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" 
event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerDied","Data":"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5"} Jan 31 08:31:10 crc kubenswrapper[4908]: I0131 08:31:10.373144 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ec76775-757b-45c3-a872-f79fe5758421" containerID="9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216" exitCode=0 Jan 31 08:31:10 crc kubenswrapper[4908]: I0131 08:31:10.373187 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerDied","Data":"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216"} Jan 31 08:31:19 crc kubenswrapper[4908]: I0131 08:31:19.457427 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerStarted","Data":"ba196a16d657a22874a9b4dfbb5dddc625e163192efa21da7407fe53b72f729e"} Jan 31 08:31:20 crc kubenswrapper[4908]: I0131 08:31:20.489109 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f7l64" podStartSLOduration=5.615935448 podStartE2EDuration="24.489092706s" podCreationTimestamp="2026-01-31 08:30:56 +0000 UTC" firstStartedPulling="2026-01-31 08:30:58.235756803 +0000 UTC m=+4164.851701457" lastFinishedPulling="2026-01-31 08:31:17.108914061 +0000 UTC m=+4183.724858715" observedRunningTime="2026-01-31 08:31:20.483442013 +0000 UTC m=+4187.099386687" watchObservedRunningTime="2026-01-31 08:31:20.489092706 +0000 UTC m=+4187.105037360" Jan 31 08:31:22 crc kubenswrapper[4908]: I0131 08:31:22.484343 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" 
event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerStarted","Data":"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa"} Jan 31 08:31:22 crc kubenswrapper[4908]: I0131 08:31:22.507551 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w2vgf" podStartSLOduration=4.33262916 podStartE2EDuration="22.507532468s" podCreationTimestamp="2026-01-31 08:31:00 +0000 UTC" firstStartedPulling="2026-01-31 08:31:03.288385454 +0000 UTC m=+4169.904330108" lastFinishedPulling="2026-01-31 08:31:21.463288762 +0000 UTC m=+4188.079233416" observedRunningTime="2026-01-31 08:31:22.505298201 +0000 UTC m=+4189.121242865" watchObservedRunningTime="2026-01-31 08:31:22.507532468 +0000 UTC m=+4189.123477122" Jan 31 08:31:23 crc kubenswrapper[4908]: I0131 08:31:23.494888 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerStarted","Data":"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c"} Jan 31 08:31:23 crc kubenswrapper[4908]: I0131 08:31:23.521795 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6j4hj" podStartSLOduration=3.45047809 podStartE2EDuration="22.521777576s" podCreationTimestamp="2026-01-31 08:31:01 +0000 UTC" firstStartedPulling="2026-01-31 08:31:03.284546047 +0000 UTC m=+4169.900490701" lastFinishedPulling="2026-01-31 08:31:22.355845533 +0000 UTC m=+4188.971790187" observedRunningTime="2026-01-31 08:31:23.513362873 +0000 UTC m=+4190.129307557" watchObservedRunningTime="2026-01-31 08:31:23.521777576 +0000 UTC m=+4190.137722230" Jan 31 08:31:26 crc kubenswrapper[4908]: I0131 08:31:26.890645 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:26 crc kubenswrapper[4908]: I0131 08:31:26.891128 4908 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:27 crc kubenswrapper[4908]: I0131 08:31:27.938511 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-f7l64" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="registry-server" probeResult="failure" output=< Jan 31 08:31:27 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 08:31:27 crc kubenswrapper[4908]: > Jan 31 08:31:30 crc kubenswrapper[4908]: I0131 08:31:30.697445 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:30 crc kubenswrapper[4908]: I0131 08:31:30.697924 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:30 crc kubenswrapper[4908]: I0131 08:31:30.740110 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:31 crc kubenswrapper[4908]: I0131 08:31:31.612940 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:31 crc kubenswrapper[4908]: I0131 08:31:31.699899 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:31 crc kubenswrapper[4908]: I0131 08:31:31.699944 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:31 crc kubenswrapper[4908]: I0131 08:31:31.761188 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:32 crc kubenswrapper[4908]: I0131 08:31:32.626174 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:33 crc kubenswrapper[4908]: I0131 08:31:33.161850 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:33 crc kubenswrapper[4908]: I0131 08:31:33.586582 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w2vgf" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="registry-server" containerID="cri-o://e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa" gracePeriod=2 Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.078290 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.162794 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.238409 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities\") pod \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.238601 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jvt9\" (UniqueName: \"kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9\") pod \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\" (UID: \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.238647 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content\") pod \"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\" (UID: 
\"ebb487bb-20cc-4b49-bb8e-5ffade4ada31\") " Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.239739 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities" (OuterVolumeSpecName: "utilities") pod "ebb487bb-20cc-4b49-bb8e-5ffade4ada31" (UID: "ebb487bb-20cc-4b49-bb8e-5ffade4ada31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.245151 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9" (OuterVolumeSpecName: "kube-api-access-6jvt9") pod "ebb487bb-20cc-4b49-bb8e-5ffade4ada31" (UID: "ebb487bb-20cc-4b49-bb8e-5ffade4ada31"). InnerVolumeSpecName "kube-api-access-6jvt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.267382 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb487bb-20cc-4b49-bb8e-5ffade4ada31" (UID: "ebb487bb-20cc-4b49-bb8e-5ffade4ada31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.340697 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.341047 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jvt9\" (UniqueName: \"kubernetes.io/projected/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-kube-api-access-6jvt9\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.341059 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb487bb-20cc-4b49-bb8e-5ffade4ada31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.595629 4908 generic.go:334] "Generic (PLEG): container finished" podID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerID="e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa" exitCode=0 Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.595682 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2vgf" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.595682 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerDied","Data":"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa"} Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.595726 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2vgf" event={"ID":"ebb487bb-20cc-4b49-bb8e-5ffade4ada31","Type":"ContainerDied","Data":"9b15a2628dd946a7f533332afeb69580b0de84e431b54ba095d067ad64c09908"} Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.595750 4908 scope.go:117] "RemoveContainer" containerID="e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.596154 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6j4hj" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="registry-server" containerID="cri-o://de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c" gracePeriod=2 Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.629735 4908 scope.go:117] "RemoveContainer" containerID="fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.639403 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.648788 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2vgf"] Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.652791 4908 scope.go:117] "RemoveContainer" containerID="7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0" Jan 31 08:31:34 crc 
kubenswrapper[4908]: I0131 08:31:34.832988 4908 scope.go:117] "RemoveContainer" containerID="e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa" Jan 31 08:31:34 crc kubenswrapper[4908]: E0131 08:31:34.833624 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa\": container with ID starting with e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa not found: ID does not exist" containerID="e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.833669 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa"} err="failed to get container status \"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa\": rpc error: code = NotFound desc = could not find container \"e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa\": container with ID starting with e04f1c62c1fe936444ec75ad4c54fad5d22e96fb754573ef88ddfa836c9ebafa not found: ID does not exist" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.833696 4908 scope.go:117] "RemoveContainer" containerID="fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5" Jan 31 08:31:34 crc kubenswrapper[4908]: E0131 08:31:34.834071 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5\": container with ID starting with fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5 not found: ID does not exist" containerID="fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.834109 4908 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5"} err="failed to get container status \"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5\": rpc error: code = NotFound desc = could not find container \"fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5\": container with ID starting with fbe8cb6b5dc3f170e05b25433d3813310b2c0aec413fb3d701cbf524499a67d5 not found: ID does not exist" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.834134 4908 scope.go:117] "RemoveContainer" containerID="7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0" Jan 31 08:31:34 crc kubenswrapper[4908]: E0131 08:31:34.834497 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0\": container with ID starting with 7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0 not found: ID does not exist" containerID="7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0" Jan 31 08:31:34 crc kubenswrapper[4908]: I0131 08:31:34.834546 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0"} err="failed to get container status \"7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0\": rpc error: code = NotFound desc = could not find container \"7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0\": container with ID starting with 7db41757cd5b61aee61ead077968af441cb61f7284b1097c636cb033d154eef0 not found: ID does not exist" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.134862 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.259179 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities\") pod \"8ec76775-757b-45c3-a872-f79fe5758421\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.259270 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content\") pod \"8ec76775-757b-45c3-a872-f79fe5758421\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.259505 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz8s6\" (UniqueName: \"kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6\") pod \"8ec76775-757b-45c3-a872-f79fe5758421\" (UID: \"8ec76775-757b-45c3-a872-f79fe5758421\") " Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.260175 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities" (OuterVolumeSpecName: "utilities") pod "8ec76775-757b-45c3-a872-f79fe5758421" (UID: "8ec76775-757b-45c3-a872-f79fe5758421"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.269301 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6" (OuterVolumeSpecName: "kube-api-access-sz8s6") pod "8ec76775-757b-45c3-a872-f79fe5758421" (UID: "8ec76775-757b-45c3-a872-f79fe5758421"). InnerVolumeSpecName "kube-api-access-sz8s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.362465 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz8s6\" (UniqueName: \"kubernetes.io/projected/8ec76775-757b-45c3-a872-f79fe5758421-kube-api-access-sz8s6\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.362509 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.399342 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ec76775-757b-45c3-a872-f79fe5758421" (UID: "8ec76775-757b-45c3-a872-f79fe5758421"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.464015 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec76775-757b-45c3-a872-f79fe5758421-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.616664 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ec76775-757b-45c3-a872-f79fe5758421" containerID="de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c" exitCode=0 Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.616810 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j4hj" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.617144 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerDied","Data":"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c"} Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.617244 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j4hj" event={"ID":"8ec76775-757b-45c3-a872-f79fe5758421","Type":"ContainerDied","Data":"cf911a21d78d78aec64c172d0a54815dfd6c74fffe6e2cef93896b6fe14a71fd"} Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.617319 4908 scope.go:117] "RemoveContainer" containerID="de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.639301 4908 scope.go:117] "RemoveContainer" containerID="9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.659946 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.668021 4908 scope.go:117] "RemoveContainer" containerID="7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.668243 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6j4hj"] Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.694856 4908 scope.go:117] "RemoveContainer" containerID="de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c" Jan 31 08:31:35 crc kubenswrapper[4908]: E0131 08:31:35.695474 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c\": container with ID starting with de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c not found: ID does not exist" containerID="de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.695535 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c"} err="failed to get container status \"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c\": rpc error: code = NotFound desc = could not find container \"de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c\": container with ID starting with de70508f79ef43456a5fd272759bc4cf1739f2b987f75dbae012a30eec07755c not found: ID does not exist" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.695572 4908 scope.go:117] "RemoveContainer" containerID="9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216" Jan 31 08:31:35 crc kubenswrapper[4908]: E0131 08:31:35.695917 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216\": container with ID starting with 9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216 not found: ID does not exist" containerID="9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.695963 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216"} err="failed to get container status \"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216\": rpc error: code = NotFound desc = could not find container \"9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216\": container with ID 
starting with 9938b4b63b216a34c03ae77479bdb083dd1cafa8f8ab14a31a0188d4309c1216 not found: ID does not exist" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.696010 4908 scope.go:117] "RemoveContainer" containerID="7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab" Jan 31 08:31:35 crc kubenswrapper[4908]: E0131 08:31:35.696491 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab\": container with ID starting with 7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab not found: ID does not exist" containerID="7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.696525 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab"} err="failed to get container status \"7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab\": rpc error: code = NotFound desc = could not find container \"7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab\": container with ID starting with 7a62f8982eef412406fd12804441734e5fec2c47e5955666abf674955c9288ab not found: ID does not exist" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.950720 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec76775-757b-45c3-a872-f79fe5758421" path="/var/lib/kubelet/pods/8ec76775-757b-45c3-a872-f79fe5758421/volumes" Jan 31 08:31:35 crc kubenswrapper[4908]: I0131 08:31:35.951727 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" path="/var/lib/kubelet/pods/ebb487bb-20cc-4b49-bb8e-5ffade4ada31/volumes" Jan 31 08:31:36 crc kubenswrapper[4908]: I0131 08:31:36.942451 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:36 crc kubenswrapper[4908]: I0131 08:31:36.994761 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:38 crc kubenswrapper[4908]: I0131 08:31:38.963336 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:31:38 crc kubenswrapper[4908]: I0131 08:31:38.963869 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f7l64" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="registry-server" containerID="cri-o://ba196a16d657a22874a9b4dfbb5dddc625e163192efa21da7407fe53b72f729e" gracePeriod=2 Jan 31 08:31:39 crc kubenswrapper[4908]: I0131 08:31:39.705479 4908 generic.go:334] "Generic (PLEG): container finished" podID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerID="ba196a16d657a22874a9b4dfbb5dddc625e163192efa21da7407fe53b72f729e" exitCode=0 Jan 31 08:31:39 crc kubenswrapper[4908]: I0131 08:31:39.705826 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerDied","Data":"ba196a16d657a22874a9b4dfbb5dddc625e163192efa21da7407fe53b72f729e"} Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.121638 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.228079 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvbg\" (UniqueName: \"kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg\") pod \"2ffac261-f2bf-4193-b869-126973eaaa2a\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.228283 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content\") pod \"2ffac261-f2bf-4193-b869-126973eaaa2a\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.228410 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities\") pod \"2ffac261-f2bf-4193-b869-126973eaaa2a\" (UID: \"2ffac261-f2bf-4193-b869-126973eaaa2a\") " Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.229174 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities" (OuterVolumeSpecName: "utilities") pod "2ffac261-f2bf-4193-b869-126973eaaa2a" (UID: "2ffac261-f2bf-4193-b869-126973eaaa2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.241517 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg" (OuterVolumeSpecName: "kube-api-access-zzvbg") pod "2ffac261-f2bf-4193-b869-126973eaaa2a" (UID: "2ffac261-f2bf-4193-b869-126973eaaa2a"). InnerVolumeSpecName "kube-api-access-zzvbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.295964 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ffac261-f2bf-4193-b869-126973eaaa2a" (UID: "2ffac261-f2bf-4193-b869-126973eaaa2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.329804 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.329833 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvbg\" (UniqueName: \"kubernetes.io/projected/2ffac261-f2bf-4193-b869-126973eaaa2a-kube-api-access-zzvbg\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.329843 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffac261-f2bf-4193-b869-126973eaaa2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.431649 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.431722 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.718262 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7l64" event={"ID":"2ffac261-f2bf-4193-b869-126973eaaa2a","Type":"ContainerDied","Data":"f2c384173845a2be2fc680462f9f353e23f4ec792e62d40172a077e01a2fa2eb"} Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.718418 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f7l64" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.718634 4908 scope.go:117] "RemoveContainer" containerID="ba196a16d657a22874a9b4dfbb5dddc625e163192efa21da7407fe53b72f729e" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.748215 4908 scope.go:117] "RemoveContainer" containerID="83ced788f3958417d7658dba3d3c54ebcd6ff67dc92d4cc2dc9d1c8a044a040c" Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.752400 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.761479 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f7l64"] Jan 31 08:31:40 crc kubenswrapper[4908]: I0131 08:31:40.785155 4908 scope.go:117] "RemoveContainer" containerID="293f814122388cf6eb2de358b24f4939569b9df5c26adfa5f413f9f56f551477" Jan 31 08:31:41 crc kubenswrapper[4908]: I0131 08:31:41.951007 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" path="/var/lib/kubelet/pods/2ffac261-f2bf-4193-b869-126973eaaa2a/volumes" Jan 31 08:32:10 crc kubenswrapper[4908]: I0131 08:32:10.431049 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:32:10 crc kubenswrapper[4908]: I0131 08:32:10.431549 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:32:40 crc kubenswrapper[4908]: I0131 08:32:40.431434 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:32:40 crc kubenswrapper[4908]: I0131 08:32:40.431910 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:32:40 crc kubenswrapper[4908]: I0131 08:32:40.431972 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:32:40 crc kubenswrapper[4908]: I0131 08:32:40.432733 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:32:40 crc kubenswrapper[4908]: I0131 08:32:40.432792 4908 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0" gracePeriod=600 Jan 31 08:32:41 crc kubenswrapper[4908]: I0131 08:32:41.233036 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0" exitCode=0 Jan 31 08:32:41 crc kubenswrapper[4908]: I0131 08:32:41.233106 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0"} Jan 31 08:32:41 crc kubenswrapper[4908]: I0131 08:32:41.233518 4908 scope.go:117] "RemoveContainer" containerID="4ac786483640d40e34bd423d3a5062b61b28926a6487437eb7cccc434e5d9154" Jan 31 08:32:42 crc kubenswrapper[4908]: I0131 08:32:42.244626 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d"} Jan 31 08:34:41 crc kubenswrapper[4908]: I0131 08:34:41.316189 4908 generic.go:334] "Generic (PLEG): container finished" podID="4104d917-4c43-4e30-8d26-600b50a30e83" containerID="dc6142cad961502989eefae1bbca5104927b4ab6110fcd22af2d0d75e4d7aad0" exitCode=1 Jan 31 08:34:41 crc kubenswrapper[4908]: I0131 08:34:41.316274 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"4104d917-4c43-4e30-8d26-600b50a30e83","Type":"ContainerDied","Data":"dc6142cad961502989eefae1bbca5104927b4ab6110fcd22af2d0d75e4d7aad0"} Jan 31 08:34:42 crc 
kubenswrapper[4908]: I0131 08:34:42.830452 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.003390 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.003542 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.003566 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004046 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004369 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8sw\" (UniqueName: \"kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004493 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004552 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004574 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004614 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004644 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.004724 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret\") pod \"4104d917-4c43-4e30-8d26-600b50a30e83\" (UID: \"4104d917-4c43-4e30-8d26-600b50a30e83\") " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.005337 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.005370 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data" (OuterVolumeSpecName: "config-data") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.006069 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.009126 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.017178 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw" (OuterVolumeSpecName: "kube-api-access-2s8sw") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "kube-api-access-2s8sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.028368 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph" (OuterVolumeSpecName: "ceph") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.034173 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.035014 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.036962 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.052445 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4104d917-4c43-4e30-8d26-600b50a30e83" (UID: "4104d917-4c43-4e30-8d26-600b50a30e83"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107563 4908 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107591 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107617 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107627 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107638 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107647 4908 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107656 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4104d917-4c43-4e30-8d26-600b50a30e83-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107666 4908 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8sw\" (UniqueName: \"kubernetes.io/projected/4104d917-4c43-4e30-8d26-600b50a30e83-kube-api-access-2s8sw\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.107677 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4104d917-4c43-4e30-8d26-600b50a30e83-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.126346 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.211508 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.338186 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"4104d917-4c43-4e30-8d26-600b50a30e83","Type":"ContainerDied","Data":"ad6b20b30882465a9b7456e06e218ba7943712aa62f21e4dd749fb1751185302"} Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.338249 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad6b20b30882465a9b7456e06e218ba7943712aa62f21e4dd749fb1751185302" Jan 31 08:34:43 crc kubenswrapper[4908]: I0131 08:34:43.338299 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.847834 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848478 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848491 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848505 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848512 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848527 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848533 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848545 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848550 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848557 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848563 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848574 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848579 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848593 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848599 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848615 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848620 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="extract-content" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848629 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4104d917-4c43-4e30-8d26-600b50a30e83" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848635 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="4104d917-4c43-4e30-8d26-600b50a30e83" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:34:48 crc kubenswrapper[4908]: E0131 08:34:48.848645 4908 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848651 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="extract-utilities" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848816 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffac261-f2bf-4193-b869-126973eaaa2a" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848832 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="4104d917-4c43-4e30-8d26-600b50a30e83" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848849 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec76775-757b-45c3-a872-f79fe5758421" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.848857 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb487bb-20cc-4b49-bb8e-5ffade4ada31" containerName="registry-server" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.849454 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.853520 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-lgv6f" Jan 31 08:34:48 crc kubenswrapper[4908]: I0131 08:34:48.868906 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.041448 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.041603 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgz97\" (UniqueName: \"kubernetes.io/projected/467d67cb-8713-4136-b0ac-9aa27649b944-kube-api-access-cgz97\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.143496 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgz97\" (UniqueName: \"kubernetes.io/projected/467d67cb-8713-4136-b0ac-9aa27649b944-kube-api-access-cgz97\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.143698 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.144172 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.162861 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgz97\" (UniqueName: \"kubernetes.io/projected/467d67cb-8713-4136-b0ac-9aa27649b944-kube-api-access-cgz97\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.169128 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"467d67cb-8713-4136-b0ac-9aa27649b944\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.465935 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:34:49 crc kubenswrapper[4908]: I0131 08:34:49.890968 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:34:49 crc kubenswrapper[4908]: W0131 08:34:49.905687 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod467d67cb_8713_4136_b0ac_9aa27649b944.slice/crio-63ef875a90fb7b7bb8f27b1804d1e12fb7b5e29a89c6f05decb7ea2c574ee38d WatchSource:0}: Error finding container 63ef875a90fb7b7bb8f27b1804d1e12fb7b5e29a89c6f05decb7ea2c574ee38d: Status 404 returned error can't find the container with id 63ef875a90fb7b7bb8f27b1804d1e12fb7b5e29a89c6f05decb7ea2c574ee38d Jan 31 08:34:50 crc kubenswrapper[4908]: I0131 08:34:50.410029 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"467d67cb-8713-4136-b0ac-9aa27649b944","Type":"ContainerStarted","Data":"63ef875a90fb7b7bb8f27b1804d1e12fb7b5e29a89c6f05decb7ea2c574ee38d"} Jan 31 08:34:52 crc kubenswrapper[4908]: I0131 08:34:52.426313 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"467d67cb-8713-4136-b0ac-9aa27649b944","Type":"ContainerStarted","Data":"f9ff4848f46ee440c4bc144d3cbc56676d1d09a747fbd57d7e87650f05d5beee"} Jan 31 08:34:52 crc kubenswrapper[4908]: I0131 08:34:52.444599 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.444109684 podStartE2EDuration="4.444579313s" podCreationTimestamp="2026-01-31 08:34:48 +0000 UTC" firstStartedPulling="2026-01-31 08:34:49.915656816 +0000 UTC m=+4396.531601480" lastFinishedPulling="2026-01-31 08:34:51.916126455 +0000 UTC m=+4398.532071109" 
observedRunningTime="2026-01-31 08:34:52.438391307 +0000 UTC m=+4399.054335961" watchObservedRunningTime="2026-01-31 08:34:52.444579313 +0000 UTC m=+4399.060523967" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.478377 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.480175 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.483783 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.484195 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.484863 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.485547 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.488143 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.504181 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.616180 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.616865 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.617099 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.617322 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.617498 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.617714 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618238 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618324 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618418 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618564 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v9c\" (UniqueName: \"kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: 
\"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618598 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.618736 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721067 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721164 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721230 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721256 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721284 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721330 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47v9c\" (UniqueName: \"kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721359 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721401 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721426 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721456 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721485 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.721516 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.722325 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.723056 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.723174 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.723377 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.723517 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.723967 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.724611 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.727885 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.728205 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.729300 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.737572 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.741150 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47v9c\" (UniqueName: \"kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.756752 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:08 crc kubenswrapper[4908]: I0131 08:35:08.818853 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:35:09 crc kubenswrapper[4908]: I0131 08:35:09.339860 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:35:09 crc kubenswrapper[4908]: I0131 08:35:09.574579 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c13761ee-a71a-4670-bab8-904ed63f2b92","Type":"ContainerStarted","Data":"ed81e0f32f6841cc114e3348e3e599a1987a3031235ba3bee7ef29968c5bd41a"} Jan 31 08:35:10 crc kubenswrapper[4908]: I0131 08:35:10.431029 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:35:10 crc kubenswrapper[4908]: I0131 08:35:10.431325 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:35:32 crc kubenswrapper[4908]: E0131 08:35:32.225781 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tobiko:current-podified" Jan 31 08:35:32 crc kubenswrapper[4908]: E0131 08:35:32.226562 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tobiko-tests-tobiko,Image:quay.io/podified-antelope-centos9/openstack-tobiko:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TOBIKO_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:TOBIKO_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:TOBIKO_LOGS_DIR_NAME,Value:tobiko-tests-tobiko-s00-podified-functional,ValueFrom:nil,},EnvVar{Name:TOBIKO_PREVENT_CREATE,Value:,ValueFrom:nil,},EnvVar{Name:TOBIKO_PYTEST_ADDOPTS,Value:,ValueFrom:nil,},EnvVar{Name:TOBIKO_TESTENV,Value:functional -- tobiko/tests/functional/podified/test_topology.py,ValueFrom:nil,},EnvVar{Name:TOBIKO_VERSION,Value:master,ValueFrom:nil,},EnvVar{Name:TOX_NUM_PROCESSES,Value:2,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{8 0} {} 8 DecimalSI},memory: {{8589934592 0} {} BinarySI},},Requests:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tobiko,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tobiko/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/tobiko/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-config,ReadOnly:false,MountPath:/etc/tobiko/tobiko.conf,SubPath:tobiko.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-private-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa,SubPath:id_ecdsa,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-public-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa.pub,SubPath:id_ecdsa.pub,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/var/lib/tobiko/.kube/config,Sub
Path:config,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47v9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42495,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42495,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tobiko-tests-tobiko-s00-podified-functional_openstack(c13761ee-a71a-4670-bab8-904ed63f2b92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:35:32 crc kubenswrapper[4908]: E0131 08:35:32.227797 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="c13761ee-a71a-4670-bab8-904ed63f2b92" Jan 31 08:35:32 crc kubenswrapper[4908]: E0131 08:35:32.780586 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tobiko:current-podified\\\"\"" 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="c13761ee-a71a-4670-bab8-904ed63f2b92" Jan 31 08:35:40 crc kubenswrapper[4908]: I0131 08:35:40.431211 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:35:40 crc kubenswrapper[4908]: I0131 08:35:40.431694 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:35:51 crc kubenswrapper[4908]: I0131 08:35:51.960734 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c13761ee-a71a-4670-bab8-904ed63f2b92","Type":"ContainerStarted","Data":"22d9b518c83cde51b59a9f9f11dff8cc7c09f957a2ab7f973da9651df3a9540f"} Jan 31 08:35:51 crc kubenswrapper[4908]: I0131 08:35:51.989672 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=3.049957084 podStartE2EDuration="44.989651521s" podCreationTimestamp="2026-01-31 08:35:07 +0000 UTC" firstStartedPulling="2026-01-31 08:35:09.34999961 +0000 UTC m=+4415.965944264" lastFinishedPulling="2026-01-31 08:35:51.289694047 +0000 UTC m=+4457.905638701" observedRunningTime="2026-01-31 08:35:51.979256248 +0000 UTC m=+4458.595200912" watchObservedRunningTime="2026-01-31 08:35:51.989651521 +0000 UTC m=+4458.605596175" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.132714 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 
31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.135705 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.149359 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.199780 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.199946 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.200536 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrfr\" (UniqueName: \"kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.304094 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 
31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.304313 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrfr\" (UniqueName: \"kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.304369 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.304803 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.304870 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc kubenswrapper[4908]: I0131 08:35:55.335298 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrfr\" (UniqueName: \"kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr\") pod \"certified-operators-9hpzm\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:55 crc 
kubenswrapper[4908]: I0131 08:35:55.457292 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:35:56 crc kubenswrapper[4908]: I0131 08:35:56.175724 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 31 08:35:56 crc kubenswrapper[4908]: W0131 08:35:56.178610 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca8df93_05f3_4619_ad31_f206a5bb698f.slice/crio-28124be2be45e5ad6d744e8b694e2fe7ab28f00a9852890ae772ebd23a640898 WatchSource:0}: Error finding container 28124be2be45e5ad6d744e8b694e2fe7ab28f00a9852890ae772ebd23a640898: Status 404 returned error can't find the container with id 28124be2be45e5ad6d744e8b694e2fe7ab28f00a9852890ae772ebd23a640898 Jan 31 08:35:57 crc kubenswrapper[4908]: I0131 08:35:57.007193 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerID="a2358582906b607dbc4bfff74b3050ed450832aaa6a717adf3b72c467049e600" exitCode=0 Jan 31 08:35:57 crc kubenswrapper[4908]: I0131 08:35:57.007247 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerDied","Data":"a2358582906b607dbc4bfff74b3050ed450832aaa6a717adf3b72c467049e600"} Jan 31 08:35:57 crc kubenswrapper[4908]: I0131 08:35:57.007481 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerStarted","Data":"28124be2be45e5ad6d744e8b694e2fe7ab28f00a9852890ae772ebd23a640898"} Jan 31 08:36:01 crc kubenswrapper[4908]: I0131 08:36:01.045388 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ca8df93-05f3-4619-ad31-f206a5bb698f" 
containerID="7f46aa7394dd667ba4928a5b4f54e3522ee4c1219cfb8452af326429d117531e" exitCode=0 Jan 31 08:36:01 crc kubenswrapper[4908]: I0131 08:36:01.045464 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerDied","Data":"7f46aa7394dd667ba4928a5b4f54e3522ee4c1219cfb8452af326429d117531e"} Jan 31 08:36:02 crc kubenswrapper[4908]: I0131 08:36:02.063883 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:36:10 crc kubenswrapper[4908]: I0131 08:36:10.431528 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:36:10 crc kubenswrapper[4908]: I0131 08:36:10.432099 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:36:10 crc kubenswrapper[4908]: I0131 08:36:10.432156 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:36:10 crc kubenswrapper[4908]: I0131 08:36:10.433084 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:36:10 crc 
kubenswrapper[4908]: I0131 08:36:10.433157 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" gracePeriod=600 Jan 31 08:36:13 crc kubenswrapper[4908]: I0131 08:36:13.166796 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" exitCode=0 Jan 31 08:36:13 crc kubenswrapper[4908]: I0131 08:36:13.166870 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d"} Jan 31 08:36:13 crc kubenswrapper[4908]: I0131 08:36:13.167440 4908 scope.go:117] "RemoveContainer" containerID="6350cada3698baef0e998420384ec2ecfb0222816ef3bafd9c38feec37875ce0" Jan 31 08:36:15 crc kubenswrapper[4908]: E0131 08:36:15.561965 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:36:16 crc kubenswrapper[4908]: I0131 08:36:16.191739 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:36:16 crc kubenswrapper[4908]: E0131 08:36:16.192469 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:36:20 crc kubenswrapper[4908]: I0131 08:36:20.241415 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerStarted","Data":"7d03f06375236cc912f9e50c4a4d03e2c454d0a256b7fb40239492d324f9fa5a"} Jan 31 08:36:21 crc kubenswrapper[4908]: I0131 08:36:21.283875 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9hpzm" podStartSLOduration=3.9579211560000003 podStartE2EDuration="26.283853113s" podCreationTimestamp="2026-01-31 08:35:55 +0000 UTC" firstStartedPulling="2026-01-31 08:35:57.009800951 +0000 UTC m=+4463.625745605" lastFinishedPulling="2026-01-31 08:36:19.335732878 +0000 UTC m=+4485.951677562" observedRunningTime="2026-01-31 08:36:21.271540892 +0000 UTC m=+4487.887485546" watchObservedRunningTime="2026-01-31 08:36:21.283853113 +0000 UTC m=+4487.899797787" Jan 31 08:36:25 crc kubenswrapper[4908]: I0131 08:36:25.457797 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:25 crc kubenswrapper[4908]: I0131 08:36:25.459134 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:25 crc kubenswrapper[4908]: I0131 08:36:25.546400 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:26 crc kubenswrapper[4908]: I0131 08:36:26.353828 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:26 crc kubenswrapper[4908]: I0131 08:36:26.413410 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 31 08:36:27 crc kubenswrapper[4908]: I0131 08:36:27.946359 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:36:27 crc kubenswrapper[4908]: E0131 08:36:27.946666 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:36:28 crc kubenswrapper[4908]: I0131 08:36:28.313686 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9hpzm" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="registry-server" containerID="cri-o://7d03f06375236cc912f9e50c4a4d03e2c454d0a256b7fb40239492d324f9fa5a" gracePeriod=2 Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.330617 4908 generic.go:334] "Generic (PLEG): container finished" podID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerID="7d03f06375236cc912f9e50c4a4d03e2c454d0a256b7fb40239492d324f9fa5a" exitCode=0 Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.330694 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerDied","Data":"7d03f06375236cc912f9e50c4a4d03e2c454d0a256b7fb40239492d324f9fa5a"} Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.801895 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.958526 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrfr\" (UniqueName: \"kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr\") pod \"8ca8df93-05f3-4619-ad31-f206a5bb698f\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.959059 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content\") pod \"8ca8df93-05f3-4619-ad31-f206a5bb698f\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.959259 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities\") pod \"8ca8df93-05f3-4619-ad31-f206a5bb698f\" (UID: \"8ca8df93-05f3-4619-ad31-f206a5bb698f\") " Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.960073 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities" (OuterVolumeSpecName: "utilities") pod "8ca8df93-05f3-4619-ad31-f206a5bb698f" (UID: "8ca8df93-05f3-4619-ad31-f206a5bb698f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:36:29 crc kubenswrapper[4908]: I0131 08:36:29.964915 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr" (OuterVolumeSpecName: "kube-api-access-dgrfr") pod "8ca8df93-05f3-4619-ad31-f206a5bb698f" (UID: "8ca8df93-05f3-4619-ad31-f206a5bb698f"). InnerVolumeSpecName "kube-api-access-dgrfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.000411 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ca8df93-05f3-4619-ad31-f206a5bb698f" (UID: "8ca8df93-05f3-4619-ad31-f206a5bb698f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.061574 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.061626 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ca8df93-05f3-4619-ad31-f206a5bb698f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.061638 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgrfr\" (UniqueName: \"kubernetes.io/projected/8ca8df93-05f3-4619-ad31-f206a5bb698f-kube-api-access-dgrfr\") on node \"crc\" DevicePath \"\"" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.349279 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hpzm" event={"ID":"8ca8df93-05f3-4619-ad31-f206a5bb698f","Type":"ContainerDied","Data":"28124be2be45e5ad6d744e8b694e2fe7ab28f00a9852890ae772ebd23a640898"} Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.350588 4908 scope.go:117] "RemoveContainer" containerID="7d03f06375236cc912f9e50c4a4d03e2c454d0a256b7fb40239492d324f9fa5a" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.349385 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hpzm" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.395696 4908 scope.go:117] "RemoveContainer" containerID="7f46aa7394dd667ba4928a5b4f54e3522ee4c1219cfb8452af326429d117531e" Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.399218 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.411956 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9hpzm"] Jan 31 08:36:30 crc kubenswrapper[4908]: I0131 08:36:30.433652 4908 scope.go:117] "RemoveContainer" containerID="a2358582906b607dbc4bfff74b3050ed450832aaa6a717adf3b72c467049e600" Jan 31 08:36:31 crc kubenswrapper[4908]: I0131 08:36:31.951370 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" path="/var/lib/kubelet/pods/8ca8df93-05f3-4619-ad31-f206a5bb698f/volumes" Jan 31 08:36:38 crc kubenswrapper[4908]: I0131 08:36:38.940659 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:36:38 crc kubenswrapper[4908]: E0131 08:36:38.941504 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:36:52 crc kubenswrapper[4908]: I0131 08:36:52.940027 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:36:52 crc kubenswrapper[4908]: E0131 08:36:52.940870 4908 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:37:03 crc kubenswrapper[4908]: I0131 08:37:03.945268 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:37:03 crc kubenswrapper[4908]: E0131 08:37:03.946208 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:37:17 crc kubenswrapper[4908]: I0131 08:37:17.950456 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:37:17 crc kubenswrapper[4908]: E0131 08:37:17.951685 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:37:29 crc kubenswrapper[4908]: I0131 08:37:29.940341 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:37:29 crc kubenswrapper[4908]: E0131 08:37:29.941277 4908 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:37:44 crc kubenswrapper[4908]: I0131 08:37:44.940088 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:37:44 crc kubenswrapper[4908]: E0131 08:37:44.941046 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:37:59 crc kubenswrapper[4908]: I0131 08:37:59.941585 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:37:59 crc kubenswrapper[4908]: E0131 08:37:59.942302 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:38:11 crc kubenswrapper[4908]: I0131 08:38:11.940688 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:38:11 crc kubenswrapper[4908]: E0131 08:38:11.941432 4908 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:38:24 crc kubenswrapper[4908]: I0131 08:38:24.940440 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:38:24 crc kubenswrapper[4908]: E0131 08:38:24.941322 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:38:35 crc kubenswrapper[4908]: I0131 08:38:35.426962 4908 generic.go:334] "Generic (PLEG): container finished" podID="c13761ee-a71a-4670-bab8-904ed63f2b92" containerID="22d9b518c83cde51b59a9f9f11dff8cc7c09f957a2ab7f973da9651df3a9540f" exitCode=0 Jan 31 08:38:35 crc kubenswrapper[4908]: I0131 08:38:35.427053 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c13761ee-a71a-4670-bab8-904ed63f2b92","Type":"ContainerDied","Data":"22d9b518c83cde51b59a9f9f11dff8cc7c09f957a2ab7f973da9651df3a9540f"} Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.924887 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.996281 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:38:36 crc kubenswrapper[4908]: E0131 08:38:36.996675 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13761ee-a71a-4670-bab8-904ed63f2b92" containerName="tobiko-tests-tobiko" Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.996694 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13761ee-a71a-4670-bab8-904ed63f2b92" containerName="tobiko-tests-tobiko" Jan 31 08:38:36 crc kubenswrapper[4908]: E0131 08:38:36.996704 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="extract-content" Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.996710 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="extract-content" Jan 31 08:38:36 crc kubenswrapper[4908]: E0131 08:38:36.996726 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="extract-utilities" Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.996734 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="extract-utilities" Jan 31 08:38:36 crc kubenswrapper[4908]: E0131 08:38:36.996750 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="registry-server" Jan 31 08:38:36 crc kubenswrapper[4908]: I0131 08:38:36.996757 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="registry-server" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.001385 4908 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ca8df93-05f3-4619-ad31-f206a5bb698f" containerName="registry-server" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.001443 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13761ee-a71a-4670-bab8-904ed63f2b92" containerName="tobiko-tests-tobiko" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.002156 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.008361 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.110191 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.110544 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.110696 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111059 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig\") pod 
\"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111228 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111318 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111430 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111560 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111718 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.111863 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112025 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112113 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47v9c\" (UniqueName: \"kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c\") pod \"c13761ee-a71a-4670-bab8-904ed63f2b92\" (UID: \"c13761ee-a71a-4670-bab8-904ed63f2b92\") " Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112384 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112512 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112592 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlz5\" (UniqueName: \"kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.112676 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113057 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113198 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113273 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " 
pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113406 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113446 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113606 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.113637 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.119427 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod 
"c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.120010 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c" (OuterVolumeSpecName: "kube-api-access-47v9c") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "kube-api-access-47v9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.121801 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph" (OuterVolumeSpecName: "ceph") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.129724 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.138363 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "tobiko-public-key". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.144429 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.147691 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.152392 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.156138 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "tobiko-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.177135 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.179917 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215320 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215418 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215453 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs\") pod 
\"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215524 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215552 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215597 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215648 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215685 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlz5\" (UniqueName: 
\"kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215712 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215741 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215768 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215819 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215887 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215902 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215914 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47v9c\" (UniqueName: \"kubernetes.io/projected/c13761ee-a71a-4670-bab8-904ed63f2b92-kube-api-access-47v9c\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215927 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215938 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215950 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215961 4908 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-kubeconfig\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.215972 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: 
\"kubernetes.io/configmap/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.216005 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.216019 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c13761ee-a71a-4670-bab8-904ed63f2b92-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.217313 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.217667 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.220485 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.220691 4908 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.221265 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.221873 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.221958 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.222900 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.223231 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: 
\"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.224136 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.234299 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlz5\" (UniqueName: \"kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.246901 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.318806 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.464846 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c13761ee-a71a-4670-bab8-904ed63f2b92","Type":"ContainerDied","Data":"ed81e0f32f6841cc114e3348e3e599a1987a3031235ba3bee7ef29968c5bd41a"} Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.465269 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed81e0f32f6841cc114e3348e3e599a1987a3031235ba3bee7ef29968c5bd41a" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.465331 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.914111 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:38:37 crc kubenswrapper[4908]: I0131 08:38:37.948219 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:38:37 crc kubenswrapper[4908]: E0131 08:38:37.948757 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:38:38 crc kubenswrapper[4908]: I0131 08:38:38.475419 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf","Type":"ContainerStarted","Data":"086ecb89ae44fee09fe8aa102aa29bbfca156d6d4ebada4d96658ea3a3183609"} Jan 31 08:38:38 crc 
kubenswrapper[4908]: I0131 08:38:38.527835 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c13761ee-a71a-4670-bab8-904ed63f2b92" (UID: "c13761ee-a71a-4670-bab8-904ed63f2b92"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:38:38 crc kubenswrapper[4908]: I0131 08:38:38.548491 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c13761ee-a71a-4670-bab8-904ed63f2b92-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:38:39 crc kubenswrapper[4908]: I0131 08:38:39.484794 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf","Type":"ContainerStarted","Data":"c1d85315c0c237556259f3e6fbdac811294f8966f5499d5dcaa68ed88238bcc8"} Jan 31 08:38:39 crc kubenswrapper[4908]: I0131 08:38:39.510387 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=3.510357084 podStartE2EDuration="3.510357084s" podCreationTimestamp="2026-01-31 08:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:38:39.509032791 +0000 UTC m=+4626.124977465" watchObservedRunningTime="2026-01-31 08:38:39.510357084 +0000 UTC m=+4626.126301768" Jan 31 08:38:52 crc kubenswrapper[4908]: I0131 08:38:52.941364 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:38:52 crc kubenswrapper[4908]: E0131 08:38:52.943321 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:39:06 crc kubenswrapper[4908]: I0131 08:39:06.940362 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:39:06 crc kubenswrapper[4908]: E0131 08:39:06.941151 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:39:19 crc kubenswrapper[4908]: I0131 08:39:19.940433 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:39:19 crc kubenswrapper[4908]: E0131 08:39:19.941208 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:39:34 crc kubenswrapper[4908]: I0131 08:39:34.941321 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:39:34 crc kubenswrapper[4908]: E0131 08:39:34.942029 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:39:47 crc kubenswrapper[4908]: I0131 08:39:47.948795 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:39:47 crc kubenswrapper[4908]: E0131 08:39:47.949605 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:39:58 crc kubenswrapper[4908]: I0131 08:39:58.941281 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:39:58 crc kubenswrapper[4908]: E0131 08:39:58.942279 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:40:09 crc kubenswrapper[4908]: I0131 08:40:09.943169 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:40:09 crc kubenswrapper[4908]: E0131 08:40:09.944051 4908 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:40:23 crc kubenswrapper[4908]: I0131 08:40:23.943509 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:40:23 crc kubenswrapper[4908]: E0131 08:40:23.944326 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:40:38 crc kubenswrapper[4908]: I0131 08:40:38.941030 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:40:38 crc kubenswrapper[4908]: E0131 08:40:38.942263 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:40:52 crc kubenswrapper[4908]: I0131 08:40:52.940104 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:40:52 crc kubenswrapper[4908]: E0131 08:40:52.941017 4908 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.611173 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.614064 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.629025 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.693841 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrfq\" (UniqueName: \"kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.694445 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.694486 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.796493 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.796553 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.796675 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrfq\" (UniqueName: \"kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.796964 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.797004 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.818671 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrfq\" (UniqueName: \"kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq\") pod \"redhat-operators-8hj56\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:03 crc kubenswrapper[4908]: I0131 08:41:03.948451 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:04 crc kubenswrapper[4908]: I0131 08:41:04.447210 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:04 crc kubenswrapper[4908]: I0131 08:41:04.800780 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerStarted","Data":"35dd52e5857a4028c21759a1616af8bef8f61907e9595af2aebb4e76457bc4e6"} Jan 31 08:41:05 crc kubenswrapper[4908]: I0131 08:41:05.810771 4908 generic.go:334] "Generic (PLEG): container finished" podID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerID="996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a" exitCode=0 Jan 31 08:41:05 crc kubenswrapper[4908]: I0131 08:41:05.811182 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerDied","Data":"996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a"} Jan 31 08:41:05 crc kubenswrapper[4908]: I0131 08:41:05.813371 4908 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 31 08:41:06 crc kubenswrapper[4908]: I0131 08:41:06.940124 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:41:06 crc kubenswrapper[4908]: E0131 08:41:06.940683 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.585729 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.588344 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.598510 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.769946 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.770312 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qdf\" (UniqueName: \"kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.770384 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.874361 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.874482 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l5qdf\" (UniqueName: \"kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.874515 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.875087 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.876494 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:07 crc kubenswrapper[4908]: I0131 08:41:07.913183 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qdf\" (UniqueName: \"kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf\") pod \"redhat-marketplace-lppfj\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:08 crc kubenswrapper[4908]: I0131 08:41:08.211726 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:08 crc kubenswrapper[4908]: I0131 08:41:08.725256 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:08 crc kubenswrapper[4908]: I0131 08:41:08.838356 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerStarted","Data":"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b"} Jan 31 08:41:08 crc kubenswrapper[4908]: W0131 08:41:08.902195 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b83e4f2_90f6_47d3_a010_4886129eebfc.slice/crio-2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648 WatchSource:0}: Error finding container 2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648: Status 404 returned error can't find the container with id 2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648 Jan 31 08:41:09 crc kubenswrapper[4908]: I0131 08:41:09.848302 4908 generic.go:334] "Generic (PLEG): container finished" podID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerID="f63e7ec41f45c36c3212855adf7559739650a0852223e8b6f1416a012bb908a8" exitCode=0 Jan 31 08:41:09 crc kubenswrapper[4908]: I0131 08:41:09.848349 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerDied","Data":"f63e7ec41f45c36c3212855adf7559739650a0852223e8b6f1416a012bb908a8"} Jan 31 08:41:09 crc kubenswrapper[4908]: I0131 08:41:09.848606 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" 
event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerStarted","Data":"2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648"} Jan 31 08:41:09 crc kubenswrapper[4908]: I0131 08:41:09.851680 4908 generic.go:334] "Generic (PLEG): container finished" podID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerID="4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b" exitCode=0 Jan 31 08:41:09 crc kubenswrapper[4908]: I0131 08:41:09.851729 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerDied","Data":"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b"} Jan 31 08:41:12 crc kubenswrapper[4908]: I0131 08:41:12.888751 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerStarted","Data":"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10"} Jan 31 08:41:12 crc kubenswrapper[4908]: I0131 08:41:12.913195 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8hj56" podStartSLOduration=4.066288126 podStartE2EDuration="9.913172777s" podCreationTimestamp="2026-01-31 08:41:03 +0000 UTC" firstStartedPulling="2026-01-31 08:41:05.813089291 +0000 UTC m=+4772.429033945" lastFinishedPulling="2026-01-31 08:41:11.659973942 +0000 UTC m=+4778.275918596" observedRunningTime="2026-01-31 08:41:12.909157936 +0000 UTC m=+4779.525102590" watchObservedRunningTime="2026-01-31 08:41:12.913172777 +0000 UTC m=+4779.529117431" Jan 31 08:41:13 crc kubenswrapper[4908]: I0131 08:41:13.950735 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:13 crc kubenswrapper[4908]: I0131 08:41:13.951297 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:14 crc kubenswrapper[4908]: I0131 08:41:14.907794 4908 generic.go:334] "Generic (PLEG): container finished" podID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerID="f2d53fcb533005554643595ffa7bbec6f97d3efac988f63cb9fa1025bd61845f" exitCode=0 Jan 31 08:41:14 crc kubenswrapper[4908]: I0131 08:41:14.909740 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerDied","Data":"f2d53fcb533005554643595ffa7bbec6f97d3efac988f63cb9fa1025bd61845f"} Jan 31 08:41:14 crc kubenswrapper[4908]: I0131 08:41:14.997519 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8hj56" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="registry-server" probeResult="failure" output=< Jan 31 08:41:14 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 08:41:14 crc kubenswrapper[4908]: > Jan 31 08:41:16 crc kubenswrapper[4908]: I0131 08:41:16.932340 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerStarted","Data":"176fff099906422b81e71efcc20f27c8f77a797dd08a8ac31f382c44f14ee907"} Jan 31 08:41:16 crc kubenswrapper[4908]: I0131 08:41:16.956711 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lppfj" podStartSLOduration=3.380829782 podStartE2EDuration="9.956686254s" podCreationTimestamp="2026-01-31 08:41:07 +0000 UTC" firstStartedPulling="2026-01-31 08:41:09.854180338 +0000 UTC m=+4776.470124992" lastFinishedPulling="2026-01-31 08:41:16.43003682 +0000 UTC m=+4783.045981464" observedRunningTime="2026-01-31 08:41:16.95174705 +0000 UTC m=+4783.567691714" watchObservedRunningTime="2026-01-31 08:41:16.956686254 +0000 UTC 
m=+4783.572630908" Jan 31 08:41:18 crc kubenswrapper[4908]: I0131 08:41:18.212687 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:18 crc kubenswrapper[4908]: I0131 08:41:18.212995 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:18 crc kubenswrapper[4908]: I0131 08:41:18.266341 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:20 crc kubenswrapper[4908]: I0131 08:41:20.941042 4908 scope.go:117] "RemoveContainer" containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:41:22 crc kubenswrapper[4908]: I0131 08:41:22.986070 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a"} Jan 31 08:41:24 crc kubenswrapper[4908]: I0131 08:41:24.645945 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:24 crc kubenswrapper[4908]: I0131 08:41:24.691857 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:24 crc kubenswrapper[4908]: I0131 08:41:24.890072 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:26 crc kubenswrapper[4908]: I0131 08:41:26.012794 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8hj56" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="registry-server" 
containerID="cri-o://d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10" gracePeriod=2 Jan 31 08:41:27 crc kubenswrapper[4908]: I0131 08:41:27.966675 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.044402 4908 generic.go:334] "Generic (PLEG): container finished" podID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerID="d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10" exitCode=0 Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.044455 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerDied","Data":"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10"} Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.044487 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hj56" event={"ID":"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9","Type":"ContainerDied","Data":"35dd52e5857a4028c21759a1616af8bef8f61907e9595af2aebb4e76457bc4e6"} Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.044507 4908 scope.go:117] "RemoveContainer" containerID="d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.044686 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8hj56" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.072122 4908 scope.go:117] "RemoveContainer" containerID="4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.099933 4908 scope.go:117] "RemoveContainer" containerID="996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.130225 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content\") pod \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.130302 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities\") pod \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.130398 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrfq\" (UniqueName: \"kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq\") pod \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\" (UID: \"76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9\") " Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.132921 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities" (OuterVolumeSpecName: "utilities") pod "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" (UID: "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.140391 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq" (OuterVolumeSpecName: "kube-api-access-dtrfq") pod "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" (UID: "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9"). InnerVolumeSpecName "kube-api-access-dtrfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.146283 4908 scope.go:117] "RemoveContainer" containerID="d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10" Jan 31 08:41:28 crc kubenswrapper[4908]: E0131 08:41:28.146879 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10\": container with ID starting with d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10 not found: ID does not exist" containerID="d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.146925 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10"} err="failed to get container status \"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10\": rpc error: code = NotFound desc = could not find container \"d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10\": container with ID starting with d5ff2e7f855bec4a9afd68e3fe5447bd7d3db7d2628c8014f9805f2386257d10 not found: ID does not exist" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.146955 4908 scope.go:117] "RemoveContainer" containerID="4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b" Jan 31 08:41:28 crc kubenswrapper[4908]: E0131 08:41:28.147726 
4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b\": container with ID starting with 4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b not found: ID does not exist" containerID="4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.147793 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b"} err="failed to get container status \"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b\": rpc error: code = NotFound desc = could not find container \"4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b\": container with ID starting with 4a4ec6d6d99c25f3928a3cb7f0b4829bf2e051c364ee22a5570e21da2099fc7b not found: ID does not exist" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.147835 4908 scope.go:117] "RemoveContainer" containerID="996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a" Jan 31 08:41:28 crc kubenswrapper[4908]: E0131 08:41:28.148235 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a\": container with ID starting with 996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a not found: ID does not exist" containerID="996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.148279 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a"} err="failed to get container status \"996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a\": rpc error: code = 
NotFound desc = could not find container \"996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a\": container with ID starting with 996c7c3922981fa8806c2b69d70a4967dbcfd51bff05a1197602a58ee80e492a not found: ID does not exist" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.235705 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.236251 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtrfq\" (UniqueName: \"kubernetes.io/projected/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-kube-api-access-dtrfq\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.266703 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.289559 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" (UID: "76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.316922 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.338391 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.396948 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:28 crc kubenswrapper[4908]: I0131 08:41:28.409107 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8hj56"] Jan 31 08:41:29 crc kubenswrapper[4908]: I0131 08:41:29.056519 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lppfj" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="registry-server" containerID="cri-o://176fff099906422b81e71efcc20f27c8f77a797dd08a8ac31f382c44f14ee907" gracePeriod=2 Jan 31 08:41:29 crc kubenswrapper[4908]: I0131 08:41:29.952946 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" path="/var/lib/kubelet/pods/76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9/volumes" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.072450 4908 generic.go:334] "Generic (PLEG): container finished" podID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerID="176fff099906422b81e71efcc20f27c8f77a797dd08a8ac31f382c44f14ee907" exitCode=0 Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.072499 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" 
event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerDied","Data":"176fff099906422b81e71efcc20f27c8f77a797dd08a8ac31f382c44f14ee907"} Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.072605 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lppfj" event={"ID":"7b83e4f2-90f6-47d3-a010-4886129eebfc","Type":"ContainerDied","Data":"2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648"} Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.072621 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2696fde8210dd0e9c2f31f2de5302a3ce5f9b0ba663b94666431ddd654019648" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.082015 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.276627 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5qdf\" (UniqueName: \"kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf\") pod \"7b83e4f2-90f6-47d3-a010-4886129eebfc\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.277000 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities\") pod \"7b83e4f2-90f6-47d3-a010-4886129eebfc\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.277138 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content\") pod \"7b83e4f2-90f6-47d3-a010-4886129eebfc\" (UID: \"7b83e4f2-90f6-47d3-a010-4886129eebfc\") " Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 
08:41:30.277846 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities" (OuterVolumeSpecName: "utilities") pod "7b83e4f2-90f6-47d3-a010-4886129eebfc" (UID: "7b83e4f2-90f6-47d3-a010-4886129eebfc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.282999 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf" (OuterVolumeSpecName: "kube-api-access-l5qdf") pod "7b83e4f2-90f6-47d3-a010-4886129eebfc" (UID: "7b83e4f2-90f6-47d3-a010-4886129eebfc"). InnerVolumeSpecName "kube-api-access-l5qdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.295170 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b83e4f2-90f6-47d3-a010-4886129eebfc" (UID: "7b83e4f2-90f6-47d3-a010-4886129eebfc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.379279 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5qdf\" (UniqueName: \"kubernetes.io/projected/7b83e4f2-90f6-47d3-a010-4886129eebfc-kube-api-access-l5qdf\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.379308 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:30 crc kubenswrapper[4908]: I0131 08:41:30.379318 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b83e4f2-90f6-47d3-a010-4886129eebfc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:31 crc kubenswrapper[4908]: I0131 08:41:31.079668 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lppfj" Jan 31 08:41:31 crc kubenswrapper[4908]: I0131 08:41:31.131643 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:31 crc kubenswrapper[4908]: I0131 08:41:31.147558 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lppfj"] Jan 31 08:41:31 crc kubenswrapper[4908]: I0131 08:41:31.953122 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" path="/var/lib/kubelet/pods/7b83e4f2-90f6-47d3-a010-4886129eebfc/volumes" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.058003 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059048 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="extract-utilities" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059069 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="extract-utilities" Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059090 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="extract-utilities" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059098 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="extract-utilities" Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059119 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="extract-content" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059130 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="extract-content" Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059152 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059160 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059176 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059183 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: E0131 08:41:57.059197 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="extract-content" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059204 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="extract-content" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059430 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b83e4f2-90f6-47d3-a010-4886129eebfc" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.059561 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ff9329-bc3d-40e5-b6a5-9a6dd32f0db9" containerName="registry-server" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.061596 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.068101 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.107228 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.107370 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.107618 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz4r\" (UniqueName: \"kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.210173 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.210380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz4r\" (UniqueName: \"kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.210445 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.210848 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.211084 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.228761 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz4r\" (UniqueName: \"kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r\") pod \"community-operators-nr4fc\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.395557 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:41:57 crc kubenswrapper[4908]: W0131 08:41:57.936696 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49fa184_4451_48ed_8c86_f4716f4f6a1d.slice/crio-7446f3955b52c484b597d91b475d1912cc7eaf81dc2c96f2db4c4e1c8e717310 WatchSource:0}: Error finding container 7446f3955b52c484b597d91b475d1912cc7eaf81dc2c96f2db4c4e1c8e717310: Status 404 returned error can't find the container with id 7446f3955b52c484b597d91b475d1912cc7eaf81dc2c96f2db4c4e1c8e717310 Jan 31 08:41:57 crc kubenswrapper[4908]: I0131 08:41:57.952691 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:41:58 crc kubenswrapper[4908]: I0131 08:41:58.301127 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerStarted","Data":"7446f3955b52c484b597d91b475d1912cc7eaf81dc2c96f2db4c4e1c8e717310"} Jan 31 08:41:59 crc kubenswrapper[4908]: I0131 08:41:59.311687 4908 
generic.go:334] "Generic (PLEG): container finished" podID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerID="aa7a69c856408a518f621926f2652105ee22b2f91a5272dcc8cc489661fd4dd2" exitCode=0 Jan 31 08:41:59 crc kubenswrapper[4908]: I0131 08:41:59.311734 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerDied","Data":"aa7a69c856408a518f621926f2652105ee22b2f91a5272dcc8cc489661fd4dd2"} Jan 31 08:42:02 crc kubenswrapper[4908]: I0131 08:42:02.358127 4908 generic.go:334] "Generic (PLEG): container finished" podID="e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" containerID="c1d85315c0c237556259f3e6fbdac811294f8966f5499d5dcaa68ed88238bcc8" exitCode=0 Jan 31 08:42:02 crc kubenswrapper[4908]: I0131 08:42:02.358243 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf","Type":"ContainerDied","Data":"c1d85315c0c237556259f3e6fbdac811294f8966f5499d5dcaa68ed88238bcc8"} Jan 31 08:42:02 crc kubenswrapper[4908]: I0131 08:42:02.363342 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerStarted","Data":"fc1499f9f4efdb7b8f26500f83781a6de88df08e28e4a7afa159113208d9e3fd"} Jan 31 08:42:03 crc kubenswrapper[4908]: I0131 08:42:03.980287 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.173903 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.174590 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.174774 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.174904 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175048 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175178 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-bqlz5\" (UniqueName: \"kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175335 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175543 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175826 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.175963 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.176667 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 
31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.176859 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary\") pod \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\" (UID: \"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf\") " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.178658 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.281751 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.382467 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e9c28ea9-f878-43bb-bce2-fe9eebbf2baf","Type":"ContainerDied","Data":"086ecb89ae44fee09fe8aa102aa29bbfca156d6d4ebada4d96658ea3a3183609"} Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.382522 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086ecb89ae44fee09fe8aa102aa29bbfca156d6d4ebada4d96658ea3a3183609" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.382587 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.507965 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph" (OuterVolumeSpecName: "ceph") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.512607 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.512713 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5" (OuterVolumeSpecName: "kube-api-access-bqlz5") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "kube-api-access-bqlz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.529064 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "tobiko-private-key". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.537740 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.552725 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.555747 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.570596 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "tobiko-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586139 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586474 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586524 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586539 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586551 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586565 4908 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586577 4908 reconciler_common.go:293] "Volume detached 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586589 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586603 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqlz5\" (UniqueName: \"kubernetes.io/projected/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kube-api-access-bqlz5\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.586617 4908 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-kubeconfig\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.587124 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.606764 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.688181 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:04 crc kubenswrapper[4908]: I0131 08:42:04.688216 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:05 crc kubenswrapper[4908]: I0131 08:42:05.479042 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" (UID: "e9c28ea9-f878-43bb-bce2-fe9eebbf2baf"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:05 crc kubenswrapper[4908]: I0131 08:42:05.501538 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e9c28ea9-f878-43bb-bce2-fe9eebbf2baf-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:07 crc kubenswrapper[4908]: I0131 08:42:07.407399 4908 generic.go:334] "Generic (PLEG): container finished" podID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerID="fc1499f9f4efdb7b8f26500f83781a6de88df08e28e4a7afa159113208d9e3fd" exitCode=0 Jan 31 08:42:07 crc kubenswrapper[4908]: I0131 08:42:07.407526 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerDied","Data":"fc1499f9f4efdb7b8f26500f83781a6de88df08e28e4a7afa159113208d9e3fd"} Jan 31 08:42:08 crc kubenswrapper[4908]: I0131 08:42:08.418494 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerStarted","Data":"290254b51834d57d9a8aca94dd3a16cc5ff5afb577debafb2846d1baedd63f16"} Jan 31 08:42:08 crc kubenswrapper[4908]: I0131 08:42:08.447371 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nr4fc" podStartSLOduration=2.6950899230000003 podStartE2EDuration="11.447351108s" podCreationTimestamp="2026-01-31 08:41:57 +0000 UTC" firstStartedPulling="2026-01-31 08:41:59.314314687 +0000 UTC m=+4825.930259341" lastFinishedPulling="2026-01-31 08:42:08.066575872 +0000 UTC m=+4834.682520526" observedRunningTime="2026-01-31 08:42:08.436764122 +0000 UTC m=+4835.052708786" watchObservedRunningTime="2026-01-31 08:42:08.447351108 +0000 UTC m=+4835.063295752" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.461523 4908 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:42:13 crc kubenswrapper[4908]: E0131 08:42:13.462310 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" containerName="tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.462325 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" containerName="tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.462696 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c28ea9-f878-43bb-bce2-fe9eebbf2baf" containerName="tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.463356 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.471555 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.478656 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mxz\" (UniqueName: \"kubernetes.io/projected/06e3656a-1b5e-404c-86dc-95842de093cc-kube-api-access-v6mxz\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.478696 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc 
kubenswrapper[4908]: I0131 08:42:13.580396 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mxz\" (UniqueName: \"kubernetes.io/projected/06e3656a-1b5e-404c-86dc-95842de093cc-kube-api-access-v6mxz\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.580436 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.580836 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.607335 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mxz\" (UniqueName: \"kubernetes.io/projected/06e3656a-1b5e-404c-86dc-95842de093cc-kube-api-access-v6mxz\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.615173 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" 
(UID: \"06e3656a-1b5e-404c-86dc-95842de093cc\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:13 crc kubenswrapper[4908]: I0131 08:42:13.796069 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:42:14 crc kubenswrapper[4908]: I0131 08:42:14.460633 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:42:14 crc kubenswrapper[4908]: I0131 08:42:14.474652 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"06e3656a-1b5e-404c-86dc-95842de093cc","Type":"ContainerStarted","Data":"a51b63592d050ee9481856b181b101b41547f58ed34341ba08f0030b78c557d5"} Jan 31 08:42:16 crc kubenswrapper[4908]: I0131 08:42:16.496900 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"06e3656a-1b5e-404c-86dc-95842de093cc","Type":"ContainerStarted","Data":"667e9df5ba2427ed52c278c9bff6f852b7c819274869c06335b8b42705e8da70"} Jan 31 08:42:16 crc kubenswrapper[4908]: I0131 08:42:16.513676 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.56159421 podStartE2EDuration="3.513658043s" podCreationTimestamp="2026-01-31 08:42:13 +0000 UTC" firstStartedPulling="2026-01-31 08:42:14.468302795 +0000 UTC m=+4841.084247449" lastFinishedPulling="2026-01-31 08:42:15.420366618 +0000 UTC m=+4842.036311282" observedRunningTime="2026-01-31 08:42:16.511019626 +0000 UTC m=+4843.126964280" watchObservedRunningTime="2026-01-31 08:42:16.513658043 +0000 UTC m=+4843.129602697" Jan 31 08:42:17 crc kubenswrapper[4908]: I0131 08:42:17.395967 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:17 crc kubenswrapper[4908]: I0131 08:42:17.396316 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:17 crc kubenswrapper[4908]: I0131 08:42:17.441077 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:17 crc kubenswrapper[4908]: I0131 08:42:17.544180 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:17 crc kubenswrapper[4908]: I0131 08:42:17.672912 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:42:19 crc kubenswrapper[4908]: I0131 08:42:19.522545 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nr4fc" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="registry-server" containerID="cri-o://290254b51834d57d9a8aca94dd3a16cc5ff5afb577debafb2846d1baedd63f16" gracePeriod=2 Jan 31 08:42:20 crc kubenswrapper[4908]: I0131 08:42:20.538918 4908 generic.go:334] "Generic (PLEG): container finished" podID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerID="290254b51834d57d9a8aca94dd3a16cc5ff5afb577debafb2846d1baedd63f16" exitCode=0 Jan 31 08:42:20 crc kubenswrapper[4908]: I0131 08:42:20.539009 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerDied","Data":"290254b51834d57d9a8aca94dd3a16cc5ff5afb577debafb2846d1baedd63f16"} Jan 31 08:42:20 crc kubenswrapper[4908]: I0131 08:42:20.952222 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.028815 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content\") pod \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.029099 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities\") pod \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.029173 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttz4r\" (UniqueName: \"kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r\") pod \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\" (UID: \"a49fa184-4451-48ed-8c86-f4716f4f6a1d\") " Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.029942 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities" (OuterVolumeSpecName: "utilities") pod "a49fa184-4451-48ed-8c86-f4716f4f6a1d" (UID: "a49fa184-4451-48ed-8c86-f4716f4f6a1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.034425 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r" (OuterVolumeSpecName: "kube-api-access-ttz4r") pod "a49fa184-4451-48ed-8c86-f4716f4f6a1d" (UID: "a49fa184-4451-48ed-8c86-f4716f4f6a1d"). InnerVolumeSpecName "kube-api-access-ttz4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.118259 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a49fa184-4451-48ed-8c86-f4716f4f6a1d" (UID: "a49fa184-4451-48ed-8c86-f4716f4f6a1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.132047 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.132083 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttz4r\" (UniqueName: \"kubernetes.io/projected/a49fa184-4451-48ed-8c86-f4716f4f6a1d-kube-api-access-ttz4r\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.132092 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49fa184-4451-48ed-8c86-f4716f4f6a1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.556426 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr4fc" event={"ID":"a49fa184-4451-48ed-8c86-f4716f4f6a1d","Type":"ContainerDied","Data":"7446f3955b52c484b597d91b475d1912cc7eaf81dc2c96f2db4c4e1c8e717310"} Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.556680 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr4fc" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.556756 4908 scope.go:117] "RemoveContainer" containerID="290254b51834d57d9a8aca94dd3a16cc5ff5afb577debafb2846d1baedd63f16" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.575823 4908 scope.go:117] "RemoveContainer" containerID="fc1499f9f4efdb7b8f26500f83781a6de88df08e28e4a7afa159113208d9e3fd" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.588242 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.598502 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nr4fc"] Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.613254 4908 scope.go:117] "RemoveContainer" containerID="aa7a69c856408a518f621926f2652105ee22b2f91a5272dcc8cc489661fd4dd2" Jan 31 08:42:21 crc kubenswrapper[4908]: I0131 08:42:21.953048 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" path="/var/lib/kubelet/pods/a49fa184-4451-48ed-8c86-f4716f4f6a1d/volumes" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.065503 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:42:29 crc kubenswrapper[4908]: E0131 08:42:29.066389 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="extract-content" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.066404 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="extract-content" Jan 31 08:42:29 crc kubenswrapper[4908]: E0131 08:42:29.066426 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="registry-server" Jan 31 08:42:29 crc 
kubenswrapper[4908]: I0131 08:42:29.066432 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="registry-server" Jan 31 08:42:29 crc kubenswrapper[4908]: E0131 08:42:29.066449 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="extract-utilities" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.066455 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="extract-utilities" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.066646 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49fa184-4451-48ed-8c86-f4716f4f6a1d" containerName="registry-server" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.067441 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.070165 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.070488 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.077132 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.202817 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203156 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203217 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203423 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmr5s\" (UniqueName: \"kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203463 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203502 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc 
kubenswrapper[4908]: I0131 08:42:29.203768 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203803 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203867 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.203950 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.306360 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.306670 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.306895 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmr5s\" (UniqueName: \"kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.307352 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.307476 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.307565 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: 
I0131 08:42:29.307779 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.307904 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.308051 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.308166 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.309154 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.309389 4908 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.309566 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.309762 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.313703 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.315244 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.315564 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.317026 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.325172 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.332346 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmr5s\" (UniqueName: \"kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.346124 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ansibletest-ansibletest\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.395769 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:42:29 crc kubenswrapper[4908]: I0131 08:42:29.848647 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:42:29 crc kubenswrapper[4908]: W0131 08:42:29.849558 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91c231b_3805_4477_bb16_cdd3a3c087b3.slice/crio-8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b WatchSource:0}: Error finding container 8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b: Status 404 returned error can't find the container with id 8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b Jan 31 08:42:30 crc kubenswrapper[4908]: I0131 08:42:30.630427 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"d91c231b-3805-4477-bb16-cdd3a3c087b3","Type":"ContainerStarted","Data":"8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b"} Jan 31 08:43:05 crc kubenswrapper[4908]: E0131 08:43:05.446178 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Jan 31 08:43:05 crc kubenswrapper[4908]: E0131 08:43:05.446863 4908 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 08:43:05 crc kubenswrapper[4908]: container &Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Jan 31 08:43:05 crc kubenswrapper[4908]: foo: bar Jan 31 08:43:05 crc kubenswrapper[4908]: 
,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Jan 31 08:43:05 crc kubenswrapper[4908]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-c
erts,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(d91c231b-3805-4477-bb16-cdd3a3c087b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 31 08:43:05 crc kubenswrapper[4908]: > logger="UnhandledError" Jan 31 08:43:05 crc kubenswrapper[4908]: E0131 08:43:05.448035 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="d91c231b-3805-4477-bb16-cdd3a3c087b3" Jan 31 08:43:05 crc kubenswrapper[4908]: E0131 08:43:05.997892 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="d91c231b-3805-4477-bb16-cdd3a3c087b3" Jan 31 08:43:40 crc kubenswrapper[4908]: I0131 08:43:40.303571 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"d91c231b-3805-4477-bb16-cdd3a3c087b3","Type":"ContainerStarted","Data":"0798afd5544958b679198ebabcf92d1dbd3ebbc8ce9e56a5302aaf1b6919aa92"} Jan 31 08:43:40 crc kubenswrapper[4908]: I0131 08:43:40.323910 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=3.6278051380000003 podStartE2EDuration="1m12.323890429s" podCreationTimestamp="2026-01-31 08:42:28 +0000 UTC" firstStartedPulling="2026-01-31 08:42:29.852223005 +0000 UTC m=+4856.468167649" lastFinishedPulling="2026-01-31 08:43:38.548308286 +0000 UTC m=+4925.164252940" observedRunningTime="2026-01-31 08:43:40.321689563 +0000 UTC m=+4926.937634227" watchObservedRunningTime="2026-01-31 08:43:40.323890429 +0000 UTC m=+4926.939835093" Jan 31 08:43:40 crc kubenswrapper[4908]: I0131 08:43:40.431223 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:43:40 crc kubenswrapper[4908]: I0131 08:43:40.431290 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:43:42 crc kubenswrapper[4908]: I0131 08:43:42.325273 4908 generic.go:334] "Generic (PLEG): container finished" podID="d91c231b-3805-4477-bb16-cdd3a3c087b3" containerID="0798afd5544958b679198ebabcf92d1dbd3ebbc8ce9e56a5302aaf1b6919aa92" exitCode=0 Jan 31 08:43:42 crc kubenswrapper[4908]: I0131 08:43:42.325341 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"d91c231b-3805-4477-bb16-cdd3a3c087b3","Type":"ContainerDied","Data":"0798afd5544958b679198ebabcf92d1dbd3ebbc8ce9e56a5302aaf1b6919aa92"} Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.111081 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296314 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296413 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296444 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary\") pod 
\"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296483 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmr5s\" (UniqueName: \"kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296508 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296589 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296637 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296711 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296772 4908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296805 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d91c231b-3805-4477-bb16-cdd3a3c087b3\" (UID: \"d91c231b-3805-4477-bb16-cdd3a3c087b3\") " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.296959 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.297501 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.303301 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.303408 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s" (OuterVolumeSpecName: "kube-api-access-lmr5s") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "kube-api-access-lmr5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.304115 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph" (OuterVolumeSpecName: "ceph") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.311630 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.329317 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "workload-ssh-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.331906 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "compute-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.332368 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.345113 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"d91c231b-3805-4477-bb16-cdd3a3c087b3","Type":"ContainerDied","Data":"8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b"} Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.345150 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da2a9d6272442dc987e0eba8ce46c74b1298a7021c647d33df48805b80f7a0b" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.345195 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.357671 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.358825 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d91c231b-3805-4477-bb16-cdd3a3c087b3" (UID: "d91c231b-3805-4477-bb16-cdd3a3c087b3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.399612 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmr5s\" (UniqueName: \"kubernetes.io/projected/d91c231b-3805-4477-bb16-cdd3a3c087b3-kube-api-access-lmr5s\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400302 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400327 4908 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-compute-ssh-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400338 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400373 4908 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-workload-ssh-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400386 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400425 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400460 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d91c231b-3805-4477-bb16-cdd3a3c087b3-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.400475 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d91c231b-3805-4477-bb16-cdd3a3c087b3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.423794 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 08:43:44 crc kubenswrapper[4908]: I0131 08:43:44.502587 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.647168 4908 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:43:47 crc kubenswrapper[4908]: E0131 08:43:47.648436 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91c231b-3805-4477-bb16-cdd3a3c087b3" containerName="ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.648459 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91c231b-3805-4477-bb16-cdd3a3c087b3" containerName="ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.649069 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91c231b-3805-4477-bb16-cdd3a3c087b3" containerName="ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.650302 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.668917 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.770954 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.771073 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5d9l\" (UniqueName: \"kubernetes.io/projected/5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359-kube-api-access-s5d9l\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: 
\"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.874116 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.874222 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.874339 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5d9l\" (UniqueName: \"kubernetes.io/projected/5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359-kube-api-access-s5d9l\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.900939 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5d9l\" (UniqueName: \"kubernetes.io/projected/5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359-kube-api-access-s5d9l\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.901908 4908 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:47 crc kubenswrapper[4908]: I0131 08:43:47.980835 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:43:48 crc kubenswrapper[4908]: I0131 08:43:48.446247 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:43:49 crc kubenswrapper[4908]: I0131 08:43:49.391686 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359","Type":"ContainerStarted","Data":"0ddbd7c069a5c4393bf2fb0c558821c61d5e50d5c28d23b73f8ce9e16c1f7a97"} Jan 31 08:43:50 crc kubenswrapper[4908]: I0131 08:43:50.401968 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359","Type":"ContainerStarted","Data":"51783c0f835a920bfc2ff9043221278b458262ded6e3050cadeaf525173cf026"} Jan 31 08:43:50 crc kubenswrapper[4908]: I0131 08:43:50.421957 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.639115281 podStartE2EDuration="3.421929667s" podCreationTimestamp="2026-01-31 08:43:47 +0000 UTC" firstStartedPulling="2026-01-31 08:43:48.450302365 +0000 UTC m=+4935.066247019" lastFinishedPulling="2026-01-31 08:43:49.233116751 +0000 UTC m=+4935.849061405" observedRunningTime="2026-01-31 08:43:50.415808133 +0000 UTC 
m=+4937.031752787" watchObservedRunningTime="2026-01-31 08:43:50.421929667 +0000 UTC m=+4937.037874321" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.274328 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.276702 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.279537 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.280236 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.296971 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.393751 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.393826 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.393866 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.394071 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.394205 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.394252 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.394298 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bbd\" (UniqueName: \"kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.394368 4908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496464 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496548 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496607 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496640 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496668 4908 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bbd\" (UniqueName: \"kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496697 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496797 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.496898 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.498525 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc 
kubenswrapper[4908]: I0131 08:44:03.498759 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.499265 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.499367 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.510163 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.513123 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: 
I0131 08:44:03.514441 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.519321 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bbd\" (UniqueName: \"kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.536630 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"horizontest-tests-horizontest\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:03 crc kubenswrapper[4908]: I0131 08:44:03.605377 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:44:04 crc kubenswrapper[4908]: W0131 08:44:04.046708 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c78150a_2975_44c5_88d5_41c2980b07d3.slice/crio-4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4 WatchSource:0}: Error finding container 4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4: Status 404 returned error can't find the container with id 4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4 Jan 31 08:44:04 crc kubenswrapper[4908]: I0131 08:44:04.047760 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:44:04 crc kubenswrapper[4908]: I0131 08:44:04.543796 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"9c78150a-2975-44c5-88d5-41c2980b07d3","Type":"ContainerStarted","Data":"4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4"} Jan 31 08:44:10 crc kubenswrapper[4908]: I0131 08:44:10.430896 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:44:10 crc kubenswrapper[4908]: I0131 08:44:10.431516 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:44:39 crc kubenswrapper[4908]: E0131 08:44:39.490065 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Jan 31 08:44:39 crc kubenswrapper[4908]: E0131 08:44:39.490963 4908 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 
DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4bbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN 
NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(9c78150a-2975-44c5-88d5-41c2980b07d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:44:39 crc kubenswrapper[4908]: E0131 08:44:39.492151 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="9c78150a-2975-44c5-88d5-41c2980b07d3" Jan 31 08:44:39 crc kubenswrapper[4908]: E0131 08:44:39.912107 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="9c78150a-2975-44c5-88d5-41c2980b07d3" Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.514351 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.514475 4908 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.514558 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.515642 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.515724 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a" gracePeriod=600 Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.927905 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a" exitCode=0 Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.928028 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a"} Jan 31 08:44:40 crc kubenswrapper[4908]: I0131 08:44:40.928330 4908 scope.go:117] "RemoveContainer" 
containerID="8b5b36aa4f894414f6b920e58527965283567474e86702ecd8f3a15122644d3d" Jan 31 08:44:41 crc kubenswrapper[4908]: I0131 08:44:41.971879 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"} Jan 31 08:44:57 crc kubenswrapper[4908]: I0131 08:44:57.100229 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"9c78150a-2975-44c5-88d5-41c2980b07d3","Type":"ContainerStarted","Data":"9c7f7d1b0ba2568d4e9789de67262762b6bb5f67f18d9e6c9ec07bc0e10d3f43"} Jan 31 08:44:57 crc kubenswrapper[4908]: I0131 08:44:57.128144 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" podStartSLOduration=3.846703364 podStartE2EDuration="55.128123615s" podCreationTimestamp="2026-01-31 08:44:02 +0000 UTC" firstStartedPulling="2026-01-31 08:44:04.048925193 +0000 UTC m=+4950.664869847" lastFinishedPulling="2026-01-31 08:44:55.330345444 +0000 UTC m=+5001.946290098" observedRunningTime="2026-01-31 08:44:57.120712649 +0000 UTC m=+5003.736657303" watchObservedRunningTime="2026-01-31 08:44:57.128123615 +0000 UTC m=+5003.744068269" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.149138 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2"] Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.151357 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.154534 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.155089 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.165976 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2"] Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.248874 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2ds\" (UniqueName: \"kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.249413 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.249481 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.351904 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp2ds\" (UniqueName: \"kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.352033 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.352092 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.353998 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.369643 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.369735 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp2ds\" (UniqueName: \"kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds\") pod \"collect-profiles-29497485-4xnc2\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.496014 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:00 crc kubenswrapper[4908]: I0131 08:45:00.937313 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2"] Jan 31 08:45:01 crc kubenswrapper[4908]: I0131 08:45:01.148616 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" event={"ID":"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836","Type":"ContainerStarted","Data":"c45939d230a815aa4302b92da8e3348dd762d622726e59c0f9695b66a79e78be"} Jan 31 08:45:04 crc kubenswrapper[4908]: I0131 08:45:04.180211 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" event={"ID":"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836","Type":"ContainerStarted","Data":"412a3eecd57d1d3566cff79d87e64a9c84724533ea79c28580255f7fb16abb7c"} Jan 31 08:45:04 crc kubenswrapper[4908]: I0131 08:45:04.204350 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" 
podStartSLOduration=4.20432072 podStartE2EDuration="4.20432072s" podCreationTimestamp="2026-01-31 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:45:04.198815451 +0000 UTC m=+5010.814760105" watchObservedRunningTime="2026-01-31 08:45:04.20432072 +0000 UTC m=+5010.820265374" Jan 31 08:45:07 crc kubenswrapper[4908]: I0131 08:45:07.207756 4908 generic.go:334] "Generic (PLEG): container finished" podID="e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" containerID="412a3eecd57d1d3566cff79d87e64a9c84724533ea79c28580255f7fb16abb7c" exitCode=0 Jan 31 08:45:07 crc kubenswrapper[4908]: I0131 08:45:07.207939 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" event={"ID":"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836","Type":"ContainerDied","Data":"412a3eecd57d1d3566cff79d87e64a9c84724533ea79c28580255f7fb16abb7c"} Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.552914 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.619175 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume\") pod \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.619371 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume\") pod \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.619484 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp2ds\" (UniqueName: \"kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds\") pod \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\" (UID: \"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836\") " Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.620436 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume" (OuterVolumeSpecName: "config-volume") pod "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" (UID: "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.632195 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" (UID: "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.633038 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds" (OuterVolumeSpecName: "kube-api-access-wp2ds") pod "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" (UID: "e831b2ef-a7cd-4ea8-9135-f3ee27f0a836"). InnerVolumeSpecName "kube-api-access-wp2ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.722339 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.722374 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 08:45:08 crc kubenswrapper[4908]: I0131 08:45:08.722385 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp2ds\" (UniqueName: \"kubernetes.io/projected/e831b2ef-a7cd-4ea8-9135-f3ee27f0a836-kube-api-access-wp2ds\") on node \"crc\" DevicePath \"\""
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.227855 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2" event={"ID":"e831b2ef-a7cd-4ea8-9135-f3ee27f0a836","Type":"ContainerDied","Data":"c45939d230a815aa4302b92da8e3348dd762d622726e59c0f9695b66a79e78be"}
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.227913 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45939d230a815aa4302b92da8e3348dd762d622726e59c0f9695b66a79e78be"
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.227931 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-4xnc2"
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.300623 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b"]
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.311027 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-zrh8b"]
Jan 31 08:45:09 crc kubenswrapper[4908]: I0131 08:45:09.953780 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705e20d7-875b-4674-9d26-463b7b47e9a7" path="/var/lib/kubelet/pods/705e20d7-875b-4674-9d26-463b7b47e9a7/volumes"
Jan 31 08:45:32 crc kubenswrapper[4908]: I0131 08:45:32.251193 4908 scope.go:117] "RemoveContainer" containerID="d68382d098de1613d80dd8bae23151a2588d0996705c258c153095b41d7d72fb"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.186814 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:46:30 crc kubenswrapper[4908]: E0131 08:46:30.187879 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" containerName="collect-profiles"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.187897 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" containerName="collect-profiles"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.188133 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="e831b2ef-a7cd-4ea8-9135-f3ee27f0a836" containerName="collect-profiles"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.189947 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.200834 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.286149 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.286199 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb28s\" (UniqueName: \"kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.286278 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.387802 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.387970 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.388014 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb28s\" (UniqueName: \"kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.388350 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.388441 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.507962 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb28s\" (UniqueName: \"kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s\") pod \"certified-operators-xs7m5\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") " pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.508431 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.893554 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:46:30 crc kubenswrapper[4908]: I0131 08:46:30.980804 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerStarted","Data":"7de2e55d1401b139a1df1c44d9a318b2ff94c146fa36f30e78729f1937ecddf2"}
Jan 31 08:46:31 crc kubenswrapper[4908]: I0131 08:46:31.990339 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerStarted","Data":"079bd55ad1baf60db594f31d93c3cf24a4fba9fcf81a7e78517b7c5979e79315"}
Jan 31 08:46:33 crc kubenswrapper[4908]: I0131 08:46:33.000605 4908 generic.go:334] "Generic (PLEG): container finished" podID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerID="079bd55ad1baf60db594f31d93c3cf24a4fba9fcf81a7e78517b7c5979e79315" exitCode=0
Jan 31 08:46:33 crc kubenswrapper[4908]: I0131 08:46:33.000676 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerDied","Data":"079bd55ad1baf60db594f31d93c3cf24a4fba9fcf81a7e78517b7c5979e79315"}
Jan 31 08:46:34 crc kubenswrapper[4908]: I0131 08:46:34.013609 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 08:46:40 crc kubenswrapper[4908]: I0131 08:46:40.062380 4908 generic.go:334] "Generic (PLEG): container finished" podID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerID="151d54a054612f9661a76d42db5d1693a07002f47ce3b6850d244ce1ed86f2f3" exitCode=0
Jan 31 08:46:40 crc kubenswrapper[4908]: I0131 08:46:40.062418 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerDied","Data":"151d54a054612f9661a76d42db5d1693a07002f47ce3b6850d244ce1ed86f2f3"}
Jan 31 08:46:52 crc kubenswrapper[4908]: I0131 08:46:52.175085 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerStarted","Data":"fff39a82eb6b698da83de310fa81636ef4d6b71056cf420628ad7643d6f9f582"}
Jan 31 08:46:52 crc kubenswrapper[4908]: I0131 08:46:52.192574 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xs7m5" podStartSLOduration=5.12628144 podStartE2EDuration="22.19255438s" podCreationTimestamp="2026-01-31 08:46:30 +0000 UTC" firstStartedPulling="2026-01-31 08:46:34.013342569 +0000 UTC m=+5100.629287223" lastFinishedPulling="2026-01-31 08:46:51.079615509 +0000 UTC m=+5117.695560163" observedRunningTime="2026-01-31 08:46:52.190775496 +0000 UTC m=+5118.806720160" watchObservedRunningTime="2026-01-31 08:46:52.19255438 +0000 UTC m=+5118.808499034"
Jan 31 08:47:00 crc kubenswrapper[4908]: I0131 08:47:00.510067 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:00 crc kubenswrapper[4908]: I0131 08:47:00.510637 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:00 crc kubenswrapper[4908]: I0131 08:47:00.552622 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:01 crc kubenswrapper[4908]: I0131 08:47:01.299577 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:01 crc kubenswrapper[4908]: I0131 08:47:01.655585 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:47:03 crc kubenswrapper[4908]: I0131 08:47:03.269899 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xs7m5" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="registry-server" containerID="cri-o://fff39a82eb6b698da83de310fa81636ef4d6b71056cf420628ad7643d6f9f582" gracePeriod=2
Jan 31 08:47:04 crc kubenswrapper[4908]: I0131 08:47:04.281796 4908 generic.go:334] "Generic (PLEG): container finished" podID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerID="fff39a82eb6b698da83de310fa81636ef4d6b71056cf420628ad7643d6f9f582" exitCode=0
Jan 31 08:47:04 crc kubenswrapper[4908]: I0131 08:47:04.281872 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerDied","Data":"fff39a82eb6b698da83de310fa81636ef4d6b71056cf420628ad7643d6f9f582"}
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.228934 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.292349 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs7m5" event={"ID":"c4bc583d-0b6a-4fe8-825d-267d69de6906","Type":"ContainerDied","Data":"7de2e55d1401b139a1df1c44d9a318b2ff94c146fa36f30e78729f1937ecddf2"}
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.292411 4908 scope.go:117] "RemoveContainer" containerID="fff39a82eb6b698da83de310fa81636ef4d6b71056cf420628ad7643d6f9f582"
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.292414 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs7m5"
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.314179 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content\") pod \"c4bc583d-0b6a-4fe8-825d-267d69de6906\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") "
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.314248 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities\") pod \"c4bc583d-0b6a-4fe8-825d-267d69de6906\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") "
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.314315 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb28s\" (UniqueName: \"kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s\") pod \"c4bc583d-0b6a-4fe8-825d-267d69de6906\" (UID: \"c4bc583d-0b6a-4fe8-825d-267d69de6906\") "
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.317288 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities" (OuterVolumeSpecName: "utilities") pod "c4bc583d-0b6a-4fe8-825d-267d69de6906" (UID: "c4bc583d-0b6a-4fe8-825d-267d69de6906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.331176 4908 scope.go:117] "RemoveContainer" containerID="151d54a054612f9661a76d42db5d1693a07002f47ce3b6850d244ce1ed86f2f3"
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.335188 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s" (OuterVolumeSpecName: "kube-api-access-vb28s") pod "c4bc583d-0b6a-4fe8-825d-267d69de6906" (UID: "c4bc583d-0b6a-4fe8-825d-267d69de6906"). InnerVolumeSpecName "kube-api-access-vb28s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.417227 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.417266 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb28s\" (UniqueName: \"kubernetes.io/projected/c4bc583d-0b6a-4fe8-825d-267d69de6906-kube-api-access-vb28s\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.440170 4908 scope.go:117] "RemoveContainer" containerID="079bd55ad1baf60db594f31d93c3cf24a4fba9fcf81a7e78517b7c5979e79315"
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.446821 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4bc583d-0b6a-4fe8-825d-267d69de6906" (UID: "c4bc583d-0b6a-4fe8-825d-267d69de6906"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.519258 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bc583d-0b6a-4fe8-825d-267d69de6906-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.635101 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.647571 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xs7m5"]
Jan 31 08:47:05 crc kubenswrapper[4908]: I0131 08:47:05.957080 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" path="/var/lib/kubelet/pods/c4bc583d-0b6a-4fe8-825d-267d69de6906/volumes"
Jan 31 08:47:10 crc kubenswrapper[4908]: I0131 08:47:10.432178 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:47:10 crc kubenswrapper[4908]: I0131 08:47:10.432814 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:47:29 crc kubenswrapper[4908]: I0131 08:47:29.515069 4908 generic.go:334] "Generic (PLEG): container finished" podID="9c78150a-2975-44c5-88d5-41c2980b07d3" containerID="9c7f7d1b0ba2568d4e9789de67262762b6bb5f67f18d9e6c9ec07bc0e10d3f43" exitCode=0
Jan 31 08:47:29 crc kubenswrapper[4908]: I0131 08:47:29.515149 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"9c78150a-2975-44c5-88d5-41c2980b07d3","Type":"ContainerDied","Data":"9c7f7d1b0ba2568d4e9789de67262762b6bb5f67f18d9e6c9ec07bc0e10d3f43"}
Jan 31 08:47:30 crc kubenswrapper[4908]: I0131 08:47:30.892908 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest"
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055238 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055331 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055417 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055610 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055649 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055678 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4bbd\" (UniqueName: \"kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055724 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.055750 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs\") pod \"9c78150a-2975-44c5-88d5-41c2980b07d3\" (UID: \"9c78150a-2975-44c5-88d5-41c2980b07d3\") "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.056719 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.071832 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.078347 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd" (OuterVolumeSpecName: "kube-api-access-g4bbd") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "kube-api-access-g4bbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.078496 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph" (OuterVolumeSpecName: "ceph") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.091753 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.108257 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.118146 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.158950 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159005 4908 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159019 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-clouds-config\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159028 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4bbd\" (UniqueName: \"kubernetes.io/projected/9c78150a-2975-44c5-88d5-41c2980b07d3-kube-api-access-g4bbd\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159037 4908 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159106 4908 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c78150a-2975-44c5-88d5-41c2980b07d3-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.159137 4908 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.188851 4908 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.262571 4908 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.267622 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9c78150a-2975-44c5-88d5-41c2980b07d3" (UID: "9c78150a-2975-44c5-88d5-41c2980b07d3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.364840 4908 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c78150a-2975-44c5-88d5-41c2980b07d3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.535151 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"9c78150a-2975-44c5-88d5-41c2980b07d3","Type":"ContainerDied","Data":"4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4"}
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.535196 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2506cccfc8fe5f62ec6e6831f2a13d132123ee5ccb9e894c9198a3d421fcc4"
Jan 31 08:47:31 crc kubenswrapper[4908]: I0131 08:47:31.535232 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest"
Jan 31 08:47:32 crc kubenswrapper[4908]: I0131 08:47:32.361368 4908 scope.go:117] "RemoveContainer" containerID="176fff099906422b81e71efcc20f27c8f77a797dd08a8ac31f382c44f14ee907"
Jan 31 08:47:32 crc kubenswrapper[4908]: I0131 08:47:32.396231 4908 scope.go:117] "RemoveContainer" containerID="f2d53fcb533005554643595ffa7bbec6f97d3efac988f63cb9fa1025bd61845f"
Jan 31 08:47:32 crc kubenswrapper[4908]: I0131 08:47:32.415181 4908 scope.go:117] "RemoveContainer" containerID="f63e7ec41f45c36c3212855adf7559739650a0852223e8b6f1416a012bb908a8"
Jan 31 08:47:40 crc kubenswrapper[4908]: I0131 08:47:40.431497 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:47:40 crc kubenswrapper[4908]: I0131 08:47:40.431969 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.757602 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Jan 31 08:47:43 crc kubenswrapper[4908]: E0131 08:47:43.758608 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c78150a-2975-44c5-88d5-41c2980b07d3" containerName="horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758623 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c78150a-2975-44c5-88d5-41c2980b07d3" containerName="horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: E0131 08:47:43.758634 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="extract-utilities"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758640 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="extract-utilities"
Jan 31 08:47:43 crc kubenswrapper[4908]: E0131 08:47:43.758650 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="extract-content"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758657 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="extract-content"
Jan 31 08:47:43 crc kubenswrapper[4908]: E0131 08:47:43.758687 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="registry-server"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758693 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="registry-server"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758901 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c78150a-2975-44c5-88d5-41c2980b07d3" containerName="horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.758928 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bc583d-0b6a-4fe8-825d-267d69de6906" containerName="registry-server"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.759677 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.767904 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.819510 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.819649 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rh9\" (UniqueName: \"kubernetes.io/projected/9df6f55e-4530-41a0-9eae-08d459c61ab2-kube-api-access-29rh9\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.921325 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.921384 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rh9\" (UniqueName: \"kubernetes.io/projected/9df6f55e-4530-41a0-9eae-08d459c61ab2-kube-api-access-29rh9\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.921839 4908 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.946291 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rh9\" (UniqueName: \"kubernetes.io/projected/9df6f55e-4530-41a0-9eae-08d459c61ab2-kube-api-access-29rh9\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:43 crc kubenswrapper[4908]: I0131 08:47:43.952609 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9df6f55e-4530-41a0-9eae-08d459c61ab2\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:44 crc kubenswrapper[4908]: I0131 08:47:44.095834 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Jan 31 08:47:44 crc kubenswrapper[4908]: E0131 08:47:44.096038 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:47:44 crc kubenswrapper[4908]: I0131 08:47:44.529853 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Jan 31 08:47:44 crc kubenswrapper[4908]: E0131 08:47:44.917655 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:47:45 crc kubenswrapper[4908]: I0131 08:47:45.663697 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"9df6f55e-4530-41a0-9eae-08d459c61ab2","Type":"ContainerStarted","Data":"a29477a321b3365064d96c88de39ca26d077239e3c2c47dc493556e76fc3ae0e"}
Jan 31 08:47:46 crc kubenswrapper[4908]: E0131 08:47:46.759214 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:47:47 crc kubenswrapper[4908]: I0131 08:47:47.680931 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"9df6f55e-4530-41a0-9eae-08d459c61ab2","Type":"ContainerStarted","Data":"51a3860b4737386bfaeedc9e5548e985553203b637c435925f2f9af8d871760f"}
Jan 31 08:47:47 crc kubenswrapper[4908]: E0131 08:47:47.682061 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:47:47 crc kubenswrapper[4908]: I0131 08:47:47.699847 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=2.860205025 podStartE2EDuration="4.699827986s" podCreationTimestamp="2026-01-31 08:47:43 +0000 UTC" firstStartedPulling="2026-01-31 08:47:44.919396656 +0000 UTC m=+5171.535341320" lastFinishedPulling="2026-01-31 08:47:46.759019627 +0000 UTC m=+5173.374964281" observedRunningTime="2026-01-31 08:47:47.6922733 +0000 UTC m=+5174.308217974" watchObservedRunningTime="2026-01-31 08:47:47.699827986 +0000 UTC m=+5174.315772640"
Jan 31 08:47:48 crc kubenswrapper[4908]: E0131 08:47:48.688315 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.431347 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.432014 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 08:48:10 
crc kubenswrapper[4908]: I0131 08:48:10.432062 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.432770 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.432832 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" gracePeriod=600 Jan 31 08:48:10 crc kubenswrapper[4908]: E0131 08:48:10.572374 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.866178 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" exitCode=0 Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.866222 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"} Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.866256 4908 scope.go:117] "RemoveContainer" containerID="d5f30cf61c5b06459cc2a12e83525b01723e89cc4e3d8f64f0cb144ac620f50a" Jan 31 08:48:10 crc kubenswrapper[4908]: I0131 08:48:10.867024 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:48:10 crc kubenswrapper[4908]: E0131 08:48:10.867407 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:48:22 crc kubenswrapper[4908]: I0131 08:48:22.940214 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:48:22 crc kubenswrapper[4908]: E0131 08:48:22.941050 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.460187 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9sx6m/must-gather-xlc9p"] Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.462517 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.467855 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9sx6m"/"kube-root-ca.crt" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.468019 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9sx6m"/"default-dockercfg-f4x9b" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.468177 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9sx6m"/"openshift-service-ca.crt" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.474019 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9sx6m/must-gather-xlc9p"] Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.577501 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446b5\" (UniqueName: \"kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.578264 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.679787 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " 
pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.679884 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446b5\" (UniqueName: \"kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.680300 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.699396 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446b5\" (UniqueName: \"kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5\") pod \"must-gather-xlc9p\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:28 crc kubenswrapper[4908]: I0131 08:48:28.785612 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:48:29 crc kubenswrapper[4908]: I0131 08:48:29.265891 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9sx6m/must-gather-xlc9p"] Jan 31 08:48:30 crc kubenswrapper[4908]: I0131 08:48:30.025148 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" event={"ID":"759b96f0-114d-41c7-a885-7e9fafc2662b","Type":"ContainerStarted","Data":"e441c6f607743626e5acfdc63ce609f34e31aeb8485b667cb5f3d4ccf24e0c4f"} Jan 31 08:48:35 crc kubenswrapper[4908]: I0131 08:48:35.084650 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" event={"ID":"759b96f0-114d-41c7-a885-7e9fafc2662b","Type":"ContainerStarted","Data":"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760"} Jan 31 08:48:36 crc kubenswrapper[4908]: I0131 08:48:36.093116 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" event={"ID":"759b96f0-114d-41c7-a885-7e9fafc2662b","Type":"ContainerStarted","Data":"cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f"} Jan 31 08:48:36 crc kubenswrapper[4908]: I0131 08:48:36.112563 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" podStartSLOduration=2.831661197 podStartE2EDuration="8.112541708s" podCreationTimestamp="2026-01-31 08:48:28 +0000 UTC" firstStartedPulling="2026-01-31 08:48:29.294017558 +0000 UTC m=+5215.909962212" lastFinishedPulling="2026-01-31 08:48:34.574898069 +0000 UTC m=+5221.190842723" observedRunningTime="2026-01-31 08:48:36.108915238 +0000 UTC m=+5222.724859912" watchObservedRunningTime="2026-01-31 08:48:36.112541708 +0000 UTC m=+5222.728486362" Jan 31 08:48:37 crc kubenswrapper[4908]: I0131 08:48:37.946088 4908 scope.go:117] "RemoveContainer" 
containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:48:37 crc kubenswrapper[4908]: E0131 08:48:37.946948 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.464487 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-7lz5v"] Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.466546 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.612706 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host\") pod \"crc-debug-7lz5v\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.612884 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfqbl\" (UniqueName: \"kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl\") pod \"crc-debug-7lz5v\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.715154 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host\") pod \"crc-debug-7lz5v\" (UID: 
\"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.715329 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfqbl\" (UniqueName: \"kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl\") pod \"crc-debug-7lz5v\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.715223 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host\") pod \"crc-debug-7lz5v\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.736688 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfqbl\" (UniqueName: \"kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl\") pod \"crc-debug-7lz5v\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: I0131 08:48:39.788430 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:48:39 crc kubenswrapper[4908]: W0131 08:48:39.818178 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod783cba32_80de_423f_be52_bb22b8c2adc0.slice/crio-f1b407f7847185c76da29c9355d47cfc4edc2f504659edacf19b6bcfa9fa7b04 WatchSource:0}: Error finding container f1b407f7847185c76da29c9355d47cfc4edc2f504659edacf19b6bcfa9fa7b04: Status 404 returned error can't find the container with id f1b407f7847185c76da29c9355d47cfc4edc2f504659edacf19b6bcfa9fa7b04 Jan 31 08:48:40 crc kubenswrapper[4908]: I0131 08:48:40.129748 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" event={"ID":"783cba32-80de-423f-be52-bb22b8c2adc0","Type":"ContainerStarted","Data":"f1b407f7847185c76da29c9355d47cfc4edc2f504659edacf19b6bcfa9fa7b04"} Jan 31 08:48:52 crc kubenswrapper[4908]: I0131 08:48:52.940650 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:48:52 crc kubenswrapper[4908]: E0131 08:48:52.941442 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:48:55 crc kubenswrapper[4908]: E0131 08:48:55.854995 4908 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 31 08:48:55 crc kubenswrapper[4908]: E0131 08:48:55.855699 4908 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfqbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-7lz5v_openshift-must-gather-9sx6m(783cba32-80de-423f-be52-bb22b8c2adc0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:48:55 crc kubenswrapper[4908]: E0131 08:48:55.856934 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" Jan 31 08:48:56 crc kubenswrapper[4908]: E0131 08:48:56.436328 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" Jan 31 08:49:03 crc kubenswrapper[4908]: I0131 08:49:03.941005 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:49:03 crc kubenswrapper[4908]: E0131 08:49:03.942532 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:49:09 crc kubenswrapper[4908]: E0131 08:49:09.940683 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:49:15 crc kubenswrapper[4908]: I0131 08:49:15.596521 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" event={"ID":"783cba32-80de-423f-be52-bb22b8c2adc0","Type":"ContainerStarted","Data":"9581d4d01cab2ae4efcd484c57ba1d60237652151f0ed76465a1befa6eb8341c"} Jan 31 08:49:15 crc kubenswrapper[4908]: I0131 08:49:15.639507 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" podStartSLOduration=1.529204616 podStartE2EDuration="36.639484268s" podCreationTimestamp="2026-01-31 08:48:39 +0000 UTC" firstStartedPulling="2026-01-31 08:48:39.820355686 +0000 UTC m=+5226.436300340" lastFinishedPulling="2026-01-31 08:49:14.930635338 +0000 UTC m=+5261.546579992" 
observedRunningTime="2026-01-31 08:49:15.617283611 +0000 UTC m=+5262.233228265" watchObservedRunningTime="2026-01-31 08:49:15.639484268 +0000 UTC m=+5262.255428932" Jan 31 08:49:15 crc kubenswrapper[4908]: I0131 08:49:15.940691 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:49:15 crc kubenswrapper[4908]: E0131 08:49:15.940932 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:49:29 crc kubenswrapper[4908]: I0131 08:49:29.941440 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:49:29 crc kubenswrapper[4908]: E0131 08:49:29.942385 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:49:40 crc kubenswrapper[4908]: I0131 08:49:40.940748 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:49:40 crc kubenswrapper[4908]: E0131 08:49:40.941838 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:49:54 crc kubenswrapper[4908]: I0131 08:49:54.940715 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:49:54 crc kubenswrapper[4908]: E0131 08:49:54.941874 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:50:07 crc kubenswrapper[4908]: I0131 08:50:07.033227 4908 generic.go:334] "Generic (PLEG): container finished" podID="783cba32-80de-423f-be52-bb22b8c2adc0" containerID="9581d4d01cab2ae4efcd484c57ba1d60237652151f0ed76465a1befa6eb8341c" exitCode=0 Jan 31 08:50:07 crc kubenswrapper[4908]: I0131 08:50:07.033302 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" event={"ID":"783cba32-80de-423f-be52-bb22b8c2adc0","Type":"ContainerDied","Data":"9581d4d01cab2ae4efcd484c57ba1d60237652151f0ed76465a1befa6eb8341c"} Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.156477 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v" Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.194941 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-7lz5v"] Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.202883 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-7lz5v"] Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.231683 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host\") pod \"783cba32-80de-423f-be52-bb22b8c2adc0\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.231813 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host" (OuterVolumeSpecName: "host") pod "783cba32-80de-423f-be52-bb22b8c2adc0" (UID: "783cba32-80de-423f-be52-bb22b8c2adc0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.231876 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfqbl\" (UniqueName: \"kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl\") pod \"783cba32-80de-423f-be52-bb22b8c2adc0\" (UID: \"783cba32-80de-423f-be52-bb22b8c2adc0\") " Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.232636 4908 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/783cba32-80de-423f-be52-bb22b8c2adc0-host\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.239250 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl" (OuterVolumeSpecName: "kube-api-access-pfqbl") pod "783cba32-80de-423f-be52-bb22b8c2adc0" (UID: "783cba32-80de-423f-be52-bb22b8c2adc0"). InnerVolumeSpecName "kube-api-access-pfqbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:50:08 crc kubenswrapper[4908]: I0131 08:50:08.334214 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfqbl\" (UniqueName: \"kubernetes.io/projected/783cba32-80de-423f-be52-bb22b8c2adc0-kube-api-access-pfqbl\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.051905 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b407f7847185c76da29c9355d47cfc4edc2f504659edacf19b6bcfa9fa7b04" Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.051973 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-7lz5v"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.342614 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-pzbxh"]
Jan 31 08:50:09 crc kubenswrapper[4908]: E0131 08:50:09.343020 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" containerName="container-00"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.343033 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" containerName="container-00"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.343234 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" containerName="container-00"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.343800 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.456775 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrfv\" (UniqueName: \"kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.456877 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.558756 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrfv\" (UniqueName: \"kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.558863 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.559016 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.576704 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrfv\" (UniqueName: \"kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv\") pod \"crc-debug-pzbxh\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") " pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.660365 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.940332 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"
Jan 31 08:50:09 crc kubenswrapper[4908]: E0131 08:50:09.941280 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:50:09 crc kubenswrapper[4908]: I0131 08:50:09.954054 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783cba32-80de-423f-be52-bb22b8c2adc0" path="/var/lib/kubelet/pods/783cba32-80de-423f-be52-bb22b8c2adc0/volumes"
Jan 31 08:50:10 crc kubenswrapper[4908]: I0131 08:50:10.065480 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh" event={"ID":"c689cfe3-2266-4156-8044-196763ea02eb","Type":"ContainerStarted","Data":"5dcf26669bdf631c60ac913e6f53c88dbd0a69738711c8c878a998e07b6b1852"}
Jan 31 08:50:11 crc kubenswrapper[4908]: I0131 08:50:11.075190 4908 generic.go:334] "Generic (PLEG): container finished" podID="c689cfe3-2266-4156-8044-196763ea02eb" containerID="7d80062c1e826eefb5aabdc19e753266968f9909ae6ac22922e43c444640e2ad" exitCode=1
Jan 31 08:50:11 crc kubenswrapper[4908]: I0131 08:50:11.075264 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh" event={"ID":"c689cfe3-2266-4156-8044-196763ea02eb","Type":"ContainerDied","Data":"7d80062c1e826eefb5aabdc19e753266968f9909ae6ac22922e43c444640e2ad"}
Jan 31 08:50:11 crc kubenswrapper[4908]: I0131 08:50:11.111698 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-pzbxh"]
Jan 31 08:50:11 crc kubenswrapper[4908]: I0131 08:50:11.121179 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9sx6m/crc-debug-pzbxh"]
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.187866 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.316236 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host\") pod \"c689cfe3-2266-4156-8044-196763ea02eb\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") "
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.316339 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkrfv\" (UniqueName: \"kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv\") pod \"c689cfe3-2266-4156-8044-196763ea02eb\" (UID: \"c689cfe3-2266-4156-8044-196763ea02eb\") "
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.316474 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host" (OuterVolumeSpecName: "host") pod "c689cfe3-2266-4156-8044-196763ea02eb" (UID: "c689cfe3-2266-4156-8044-196763ea02eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.316964 4908 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c689cfe3-2266-4156-8044-196763ea02eb-host\") on node \"crc\" DevicePath \"\""
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.327668 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv" (OuterVolumeSpecName: "kube-api-access-wkrfv") pod "c689cfe3-2266-4156-8044-196763ea02eb" (UID: "c689cfe3-2266-4156-8044-196763ea02eb"). InnerVolumeSpecName "kube-api-access-wkrfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 08:50:12 crc kubenswrapper[4908]: I0131 08:50:12.419229 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkrfv\" (UniqueName: \"kubernetes.io/projected/c689cfe3-2266-4156-8044-196763ea02eb-kube-api-access-wkrfv\") on node \"crc\" DevicePath \"\""
Jan 31 08:50:13 crc kubenswrapper[4908]: I0131 08:50:13.091773 4908 scope.go:117] "RemoveContainer" containerID="7d80062c1e826eefb5aabdc19e753266968f9909ae6ac22922e43c444640e2ad"
Jan 31 08:50:13 crc kubenswrapper[4908]: I0131 08:50:13.091802 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/crc-debug-pzbxh"
Jan 31 08:50:13 crc kubenswrapper[4908]: I0131 08:50:13.949884 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c689cfe3-2266-4156-8044-196763ea02eb" path="/var/lib/kubelet/pods/c689cfe3-2266-4156-8044-196763ea02eb/volumes"
Jan 31 08:50:22 crc kubenswrapper[4908]: I0131 08:50:22.939917 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"
Jan 31 08:50:22 crc kubenswrapper[4908]: E0131 08:50:22.940699 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:50:23 crc kubenswrapper[4908]: E0131 08:50:23.940455 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.106360 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_d91c231b-3805-4477-bb16-cdd3a3c087b3/ansibletest-ansibletest/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.279652 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b879d7b7d-bdrjp_f507dd3e-9063-413d-9549-88d6a9298d28/barbican-api/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.336868 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b879d7b7d-bdrjp_f507dd3e-9063-413d-9549-88d6a9298d28/barbican-api-log/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.484144 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d54fb7d5b-4554k_a90a3a29-b5c4-4af3-a192-0d897b673ae7/barbican-keystone-listener/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.558286 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d54fb7d5b-4554k_a90a3a29-b5c4-4af3-a192-0d897b673ae7/barbican-keystone-listener-log/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.630875 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68b6964dc9-tqj78_1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0/barbican-worker/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.723685 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68b6964dc9-tqj78_1bbcc212-3a50-4e5b-9ca7-e3c4d440cba0/barbican-worker-log/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.868763 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k7gqq_46102316-b0a9-463d-9b57-6499478e3031/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:32 crc kubenswrapper[4908]: I0131 08:50:32.950696 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7a010224-6b0f-4037-a4ca-f32fce0ab77b/ceilometer-central-agent/0.log"
Jan 31 08:50:33 crc kubenswrapper[4908]: I0131 08:50:33.031745 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7a010224-6b0f-4037-a4ca-f32fce0ab77b/ceilometer-notification-agent/0.log"
Jan 31 08:50:33 crc kubenswrapper[4908]: I0131 08:50:33.066218 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7a010224-6b0f-4037-a4ca-f32fce0ab77b/sg-core/0.log"
Jan 31 08:50:33 crc kubenswrapper[4908]: I0131 08:50:33.075490 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7a010224-6b0f-4037-a4ca-f32fce0ab77b/proxy-httpd/0.log"
Jan 31 08:50:33 crc kubenswrapper[4908]: I0131 08:50:33.390038 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-d77nz_ef67c3c5-b34c-4302-9f15-55df61dc6e41/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:33 crc kubenswrapper[4908]: I0131 08:50:33.437630 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mtv8n_e7290989-e8f1-419b-8196-7e2acaaac9db/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.031611 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4b5f98a-7a0f-42c1-8af4-c220716fe6b4/cinder-api-log/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.084766 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130/probe/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.115960 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4b5f98a-7a0f-42c1-8af4-c220716fe6b4/cinder-api/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.186889 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e1dfb9b7-4b8c-4d05-b52a-c1dfc0789130/cinder-backup/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.476640 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_650ef73d-d2fe-4042-accc-ae37bbacde25/cinder-scheduler/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.580840 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_223899d0-e94c-4e2d-bfba-d9b7baec40e1/cinder-volume/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.581398 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_650ef73d-d2fe-4042-accc-ae37bbacde25/probe/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.657013 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_223899d0-e94c-4e2d-bfba-d9b7baec40e1/probe/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.866208 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9vqg2_a5ab199c-fca0-4b18-aaf0-572c43e84695/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:34 crc kubenswrapper[4908]: I0131 08:50:34.876008 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c82r6_f7bb5704-7aa7-4021-bd84-2065fdda3980/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.064415 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-cq764_6b5bca03-0d34-432b-a2a8-2a30d6cec2cd/init/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.331678 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6/glance-httpd/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.365090 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-cq764_6b5bca03-0d34-432b-a2a8-2a30d6cec2cd/dnsmasq-dns/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.386906 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-cq764_6b5bca03-0d34-432b-a2a8-2a30d6cec2cd/init/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.517643 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cb838df-d11c-4ba5-8c51-7f5cb5d83ca6/glance-log/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.548664 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7812465e-1935-4283-865b-c02289d7bd1d/glance-log/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.573433 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7812465e-1935-4283-865b-c02289d7bd1d/glance-httpd/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.901628 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7c4bcc4864-knpgw_ee45fe79-e3e5-494d-a355-4f8cd5401c8f/horizon/0.log"
Jan 31 08:50:35 crc kubenswrapper[4908]: I0131 08:50:35.939827 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"
Jan 31 08:50:35 crc kubenswrapper[4908]: E0131 08:50:35.940110 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.059407 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_9c78150a-2975-44c5-88d5-41c2980b07d3/horizontest-tests-horizontest/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.158710 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wzh6g_c9105f7e-b7e8-451c-be73-c07970181984/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.507067 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7c4bcc4864-knpgw_ee45fe79-e3e5-494d-a355-4f8cd5401c8f/horizon-log/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.520525 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jxn85_ff124f52-0985-4497-bc1a-4864a1973914/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.655777 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d75564955-lkfg2_24d8623b-0810-4e1a-8ab2-83e734e71cb3/keystone-api/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.668297 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497441-tkvc9_2dd5446d-4cd7-4de0-83cb-0420400ee416/keystone-cron/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.715629 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9eadc7d2-1530-480d-b152-1e13e0d78eac/kube-state-metrics/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.884823 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8jfph_aa48837f-3373-4699-936f-a64a1c1daf15/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.986623 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_b36d23df-e364-41df-bfd4-751e0104325d/manila-api/0.log"
Jan 31 08:50:36 crc kubenswrapper[4908]: I0131 08:50:36.995304 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_b36d23df-e364-41df-bfd4-751e0104325d/manila-api-log/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.114863 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e7d1af3f-b25e-47f1-9b97-ee268d46505f/probe/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.196462 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e7d1af3f-b25e-47f1-9b97-ee268d46505f/manila-scheduler/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.260009 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_57e85c46-6c67-4c97-b7e4-49b3090ceda1/manila-share/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.318374 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_57e85c46-6c67-4c97-b7e4-49b3090ceda1/probe/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.544843 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6577f594f-lz5n8_6333954b-6275-456a-a634-1273c62c4cec/neutron-api/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.553291 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6577f594f-lz5n8_6333954b-6275-456a-a634-1273c62c4cec/neutron-httpd/0.log"
Jan 31 08:50:37 crc kubenswrapper[4908]: I0131 08:50:37.765835 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kk89x_697af640-d294-4a01-a0d5-1ee46f8b75ae/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.160420 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7fe735d-6bf9-4174-a43e-43a5a79bf69b/nova-api-log/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.247569 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7f7e63c4-f87f-4cab-be77-7da55fcddd87/nova-cell0-conductor-conductor/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.514015 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_54f0b1fc-48c2-4f04-8087-b5d94b8168b2/nova-cell1-conductor-conductor/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.544489 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7fe735d-6bf9-4174-a43e-43a5a79bf69b/nova-api-api/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.644695 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_343522a1-814f-4826-aec8-bdf76f6f9659/nova-cell1-novncproxy-novncproxy/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.750823 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-mzl66_e1e4ceb2-80f1-47b2-ab03-54b4742f5b0b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:38 crc kubenswrapper[4908]: I0131 08:50:38.971310 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e7464ca4-eaeb-4a8f-a554-33acef353bfa/nova-metadata-log/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.157278 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6d36e511-03b7-4d73-9a5e-ac775fafa866/nova-scheduler-scheduler/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.223338 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b3616414-a3b1-49e4-b87e-29abb6752ccb/mysql-bootstrap/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.431031 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b3616414-a3b1-49e4-b87e-29abb6752ccb/mysql-bootstrap/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.514762 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b3616414-a3b1-49e4-b87e-29abb6752ccb/galera/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.647969 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3b18d7-fe50-4d65-b351-bb3bf14854f1/mysql-bootstrap/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.940992 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3b18d7-fe50-4d65-b351-bb3bf14854f1/galera/0.log"
Jan 31 08:50:39 crc kubenswrapper[4908]: I0131 08:50:39.965445 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3b18d7-fe50-4d65-b351-bb3bf14854f1/mysql-bootstrap/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.147052 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f0d30b67-d125-4065-bb37-e91a0ba45b29/openstackclient/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.147315 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jt4wn_d6e9ace1-1aad-474c-a7be-73e4a08770e1/ovn-controller/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.404370 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4xcwk_eea5d160-58e2-45d2-9546-4a837cb35f0b/openstack-network-exporter/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.629602 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lnjb2_a95e7519-346b-4852-a98a-f164fc0d2b83/ovsdb-server-init/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.796888 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lnjb2_a95e7519-346b-4852-a98a-f164fc0d2b83/ovsdb-server-init/0.log"
Jan 31 08:50:40 crc kubenswrapper[4908]: I0131 08:50:40.879866 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lnjb2_a95e7519-346b-4852-a98a-f164fc0d2b83/ovsdb-server/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.342642 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lnjb2_a95e7519-346b-4852-a98a-f164fc0d2b83/ovs-vswitchd/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.479442 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e7464ca4-eaeb-4a8f-a554-33acef353bfa/nova-metadata-metadata/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.602766 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5gx7p_55dbf02b-7749-423f-9f04-7b2d545b9eaa/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.663241 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a672a17c-744a-4e39-b753-e406a91a02f0/openstack-network-exporter/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.761134 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a672a17c-744a-4e39-b753-e406a91a02f0/ovn-northd/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.870652 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2/openstack-network-exporter/0.log"
Jan 31 08:50:41 crc kubenswrapper[4908]: I0131 08:50:41.900010 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d2a696d8-67cd-4f3d-b4db-b26d2e58d4c2/ovsdbserver-nb/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.118515 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9177e471-d5ab-4e6f-85d6-24bb337facf6/openstack-network-exporter/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.155196 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9177e471-d5ab-4e6f-85d6-24bb337facf6/ovsdbserver-sb/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.452786 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b4c4c4576-4d8kj_2c09e85d-b07b-4be2-b947-4777bcdd977a/placement-log/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.463795 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b4c4c4576-4d8kj_2c09e85d-b07b-4be2-b947-4777bcdd977a/placement-api/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.497206 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_676f585e-f8bd-4dd3-bcab-e75c830382e3/setup-container/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.727505 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32/setup-container/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.752034 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_676f585e-f8bd-4dd3-bcab-e75c830382e3/setup-container/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.797103 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_676f585e-f8bd-4dd3-bcab-e75c830382e3/rabbitmq/0.log"
Jan 31 08:50:42 crc kubenswrapper[4908]: I0131 08:50:42.994501 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32/setup-container/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.000504 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9bf60d3-67cf-4bdc-9d31-eaa99a1e1f32/rabbitmq/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.065852 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2nb4x_a22134c5-5619-491a-9a1e-b7c07167ee98/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.217811 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pp7g5_48a117b4-fbbf-464c-a5bb-301f52736dee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.323687 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-96lv7_5ab86783-08b3-4eed-bb8c-f053a7d46d0c/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.533111 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wp9w6_cf1127bf-036b-4e90-ae26-84a790d46e73/ssh-known-hosts-edpm-deployment/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.628193 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_40735625-500c-4305-a02b-1ba667645b50/tempest-tests-tempest-tests-runner/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.788261 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_4104d917-4c43-4e30-8d26-600b50a30e83/tempest-tests-tempest-tests-runner/0.log"
Jan 31 08:50:43 crc kubenswrapper[4908]: I0131 08:50:43.856784 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_5d1dd36d-b9dd-4f86-bfed-d63e1d1a2359/test-operator-logs-container/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.117493 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_9df6f55e-4530-41a0-9eae-08d459c61ab2/test-operator-logs-container/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.143696 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_467d67cb-8713-4136-b0ac-9aa27649b944/test-operator-logs-container/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.371246 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_06e3656a-1b5e-404c-86dc-95842de093cc/test-operator-logs-container/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.457532 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_c13761ee-a71a-4670-bab8-904ed63f2b92/tobiko-tests-tobiko/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.646321 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_e9c28ea9-f878-43bb-bce2-fe9eebbf2baf/tobiko-tests-tobiko/0.log"
Jan 31 08:50:44 crc kubenswrapper[4908]: I0131 08:50:44.683462 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5hnj2_0c429649-0307-4288-87cb-43c90dc9bad2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 08:50:47 crc kubenswrapper[4908]: I0131 08:50:47.947469 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"
Jan 31 08:50:47 crc kubenswrapper[4908]: E0131 08:50:47.948391 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:51:01 crc kubenswrapper[4908]: I0131 08:51:01.888239 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dc051fb6-064f-4ae7-8a0b-c69967d67049/memcached/0.log"
Jan 31 08:51:02 crc kubenswrapper[4908]: I0131 08:51:02.940853 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e"
Jan 31 08:51:02 crc kubenswrapper[4908]: E0131 08:51:02.941112 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 08:51:11 crc kubenswrapper[4908]: I0131 08:51:11.611864 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/util/0.log"
Jan 31 08:51:11 crc kubenswrapper[4908]: I0131 08:51:11.782760 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/util/0.log"
Jan 31 08:51:11 crc kubenswrapper[4908]: I0131 08:51:11.789284 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/pull/0.log"
Jan 31 08:51:11 crc kubenswrapper[4908]: I0131 08:51:11.800001 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/pull/0.log"
Jan 31 08:51:11 crc kubenswrapper[4908]: I0131 08:51:11.997650 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/extract/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.041939 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/pull/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.064002 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3144c027b2f9fef1ea1afc8edc39e8d1ae457e5a83d20e810a18fae4b99b9sb_914ddeaf-aa45-4a08-a266-d166da23a80b/util/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.323503 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-9xflb_cccc8258-9b99-4e94-911f-46cd1f95e2b7/manager/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.323885 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-d4khs_94ede047-b24c-4510-946c-1fcc23ce8862/manager/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.424846 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-nkfsp_fda2b57a-aff7-4b0f-baf1-dcc00fb5aa32/manager/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.545094 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-944q9_51c0e91e-58e4-4cdf-a06a-b79078097f32/manager/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.618562 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-p4nmk_0f7ae625-c53e-4e59-8fa6-357e0cf2e058/manager/0.log"
Jan 31 08:51:12 crc kubenswrapper[4908]: I0131 08:51:12.776817 4908
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-z9bfq_1103bcc6-b9d1-44b0-9206-1cf316e40aa1/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.061811 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-f254v_0ba21e20-1850-4eea-9eb1-c07fcb41619f/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.162475 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-kqzsg_2decce95-3c8d-4a0e-b624-dcb914947d90/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.174601 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-fvt6p_d689356c-4f7d-46ed-927d-ce6b20bb4906/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.325769 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-brc72_ed8b4596-65fa-4171-bde7-1507bc3fe80b/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.448723 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-j6b5m_8d3bf73c-1178-4613-a68a-7897eaf053e9/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.606538 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-h8xqc_1d777955-e0fc-4554-8e40-b17bdaaf752f/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.721632 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bf666_1a292336-6b89-4bd0-9f25-28190fad7f20/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 
08:51:13.860445 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-j8gx2_e8436c75-0fe8-4fd9-98f0-7c2f9c41ef61/manager/0.log" Jan 31 08:51:13 crc kubenswrapper[4908]: I0131 08:51:13.903606 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d5hbbl_043d998e-7d47-4223-8bb5-8aa2f4a16b9c/manager/0.log" Jan 31 08:51:14 crc kubenswrapper[4908]: I0131 08:51:14.265531 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-86bfc46b97-tmj4g_5a322882-c0d9-45ec-803c-6e1ea6270dbb/operator/0.log" Jan 31 08:51:14 crc kubenswrapper[4908]: I0131 08:51:14.329311 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-62zcb_ec772726-ea9e-4f95-a9e5-88ab00f607f9/registry-server/0.log" Jan 31 08:51:14 crc kubenswrapper[4908]: I0131 08:51:14.564961 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-nf2q7_c3d892f3-217a-4c11-9625-4b0dfffeaca0/manager/0.log" Jan 31 08:51:14 crc kubenswrapper[4908]: I0131 08:51:14.840396 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-2nnfd_349437a4-aa6e-4e78-95e5-0ee664eba158/manager/0.log" Jan 31 08:51:14 crc kubenswrapper[4908]: I0131 08:51:14.983839 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-phddh_78b4780a-6127-426f-9d27-754ab311f0f8/operator/0.log" Jan 31 08:51:15 crc kubenswrapper[4908]: I0131 08:51:15.093057 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-rcz4z_f8753705-b38b-4124-ad36-013ca716e47e/manager/0.log" Jan 31 08:51:15 crc 
kubenswrapper[4908]: I0131 08:51:15.434303 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-bgpbn_d00296fc-50ea-4b6e-9c9b-492ea9027347/manager/0.log" Jan 31 08:51:15 crc kubenswrapper[4908]: I0131 08:51:15.546612 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-86c469f8fb-d6v25_317d11b5-971f-4f7c-8ab7-60b95122d08a/manager/0.log" Jan 31 08:51:15 crc kubenswrapper[4908]: I0131 08:51:15.655845 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-v9f72_4601f2a2-ec79-4ab3-a3b5-d176ac2359e8/manager/0.log" Jan 31 08:51:15 crc kubenswrapper[4908]: I0131 08:51:15.961557 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64c4b754d-2gx64_6117707d-0a29-4b0b-bf9e-c23c308952b4/manager/0.log" Jan 31 08:51:16 crc kubenswrapper[4908]: I0131 08:51:16.940345 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:51:16 crc kubenswrapper[4908]: E0131 08:51:16.940859 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.874638 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:24 crc kubenswrapper[4908]: E0131 08:51:24.875561 4908 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c689cfe3-2266-4156-8044-196763ea02eb" containerName="container-00" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.875574 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c689cfe3-2266-4156-8044-196763ea02eb" containerName="container-00" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.875784 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c689cfe3-2266-4156-8044-196763ea02eb" containerName="container-00" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.877934 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.897179 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:24 crc kubenswrapper[4908]: E0131 08:51:24.940604 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.995437 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknkl\" (UniqueName: \"kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:24 crc kubenswrapper[4908]: I0131 08:51:24.995680 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:24 crc 
kubenswrapper[4908]: I0131 08:51:24.995845 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.097639 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.097800 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.097886 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rknkl\" (UniqueName: \"kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.098332 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: 
I0131 08:51:25.098390 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.122571 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rknkl\" (UniqueName: \"kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl\") pod \"redhat-marketplace-7p82z\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.198356 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:25 crc kubenswrapper[4908]: I0131 08:51:25.979472 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:26 crc kubenswrapper[4908]: I0131 08:51:26.719751 4908 generic.go:334] "Generic (PLEG): container finished" podID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerID="69250d955a729077bdd3a076768c37df08416a32f363e4f4dedba59a1fbd79b2" exitCode=0 Jan 31 08:51:26 crc kubenswrapper[4908]: I0131 08:51:26.719797 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerDied","Data":"69250d955a729077bdd3a076768c37df08416a32f363e4f4dedba59a1fbd79b2"} Jan 31 08:51:26 crc kubenswrapper[4908]: I0131 08:51:26.720052 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" 
event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerStarted","Data":"b166575f7daf0b431dfc9ab4a82097bda49886bc5170d7fe22a57eae35d70325"} Jan 31 08:51:28 crc kubenswrapper[4908]: I0131 08:51:28.735523 4908 generic.go:334] "Generic (PLEG): container finished" podID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerID="be0683b9257398f7d217ef35c512836089afb43167f1019426d19d891bf7e48b" exitCode=0 Jan 31 08:51:28 crc kubenswrapper[4908]: I0131 08:51:28.735762 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerDied","Data":"be0683b9257398f7d217ef35c512836089afb43167f1019426d19d891bf7e48b"} Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.664504 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.666869 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.676470 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.795495 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5w5\" (UniqueName: \"kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.795568 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.795605 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.897657 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv5w5\" (UniqueName: \"kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.897738 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.897773 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.898493 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.898573 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.916175 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv5w5\" (UniqueName: \"kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5\") pod \"redhat-operators-94b8j\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.944513 4908 scope.go:117] "RemoveContainer" 
containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:51:29 crc kubenswrapper[4908]: E0131 08:51:29.944927 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:51:29 crc kubenswrapper[4908]: I0131 08:51:29.987191 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:31 crc kubenswrapper[4908]: W0131 08:51:31.664276 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03acdc42_57b9_41bf_a1f3_d33213850a35.slice/crio-49aa934decfc294c67a3b26f31f0267767a2474ab6fa6f0ca6901b808ac92084 WatchSource:0}: Error finding container 49aa934decfc294c67a3b26f31f0267767a2474ab6fa6f0ca6901b808ac92084: Status 404 returned error can't find the container with id 49aa934decfc294c67a3b26f31f0267767a2474ab6fa6f0ca6901b808ac92084 Jan 31 08:51:31 crc kubenswrapper[4908]: I0131 08:51:31.686205 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:31 crc kubenswrapper[4908]: I0131 08:51:31.779712 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerStarted","Data":"c1878db9818cb7417801e4b44b8b84e4fefc4de0f819f10be6d0d02690a0015d"} Jan 31 08:51:31 crc kubenswrapper[4908]: I0131 08:51:31.781960 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94b8j" 
event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerStarted","Data":"49aa934decfc294c67a3b26f31f0267767a2474ab6fa6f0ca6901b808ac92084"} Jan 31 08:51:31 crc kubenswrapper[4908]: I0131 08:51:31.805464 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7p82z" podStartSLOduration=3.210573141 podStartE2EDuration="7.805449934s" podCreationTimestamp="2026-01-31 08:51:24 +0000 UTC" firstStartedPulling="2026-01-31 08:51:26.721497477 +0000 UTC m=+5393.337442121" lastFinishedPulling="2026-01-31 08:51:31.31637425 +0000 UTC m=+5397.932318914" observedRunningTime="2026-01-31 08:51:31.804460439 +0000 UTC m=+5398.420405103" watchObservedRunningTime="2026-01-31 08:51:31.805449934 +0000 UTC m=+5398.421394588" Jan 31 08:51:32 crc kubenswrapper[4908]: I0131 08:51:32.792272 4908 generic.go:334] "Generic (PLEG): container finished" podID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerID="86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b" exitCode=0 Jan 31 08:51:32 crc kubenswrapper[4908]: I0131 08:51:32.792342 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94b8j" event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerDied","Data":"86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b"} Jan 31 08:51:34 crc kubenswrapper[4908]: I0131 08:51:34.811792 4908 generic.go:334] "Generic (PLEG): container finished" podID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerID="79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778" exitCode=0 Jan 31 08:51:34 crc kubenswrapper[4908]: I0131 08:51:34.811847 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94b8j" event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerDied","Data":"79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778"} Jan 31 08:51:34 crc kubenswrapper[4908]: I0131 08:51:34.813634 
4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:51:35 crc kubenswrapper[4908]: I0131 08:51:35.199292 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:35 crc kubenswrapper[4908]: I0131 08:51:35.200706 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:35 crc kubenswrapper[4908]: I0131 08:51:35.246676 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:36 crc kubenswrapper[4908]: I0131 08:51:36.646287 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5bzw7_a1eb7039-2ff7-48da-85d1-471dfe4f956b/control-plane-machine-set-operator/0.log" Jan 31 08:51:36 crc kubenswrapper[4908]: I0131 08:51:36.868122 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zrxf_232fc61b-967c-45a9-86fc-f9481f555e6e/machine-api-operator/0.log" Jan 31 08:51:36 crc kubenswrapper[4908]: I0131 08:51:36.887145 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:36 crc kubenswrapper[4908]: I0131 08:51:36.942761 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zrxf_232fc61b-967c-45a9-86fc-f9481f555e6e/kube-rbac-proxy/0.log" Jan 31 08:51:37 crc kubenswrapper[4908]: I0131 08:51:37.448212 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:37 crc kubenswrapper[4908]: I0131 08:51:37.846649 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94b8j" 
event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerStarted","Data":"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c"} Jan 31 08:51:38 crc kubenswrapper[4908]: I0131 08:51:38.854002 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7p82z" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="registry-server" containerID="cri-o://c1878db9818cb7417801e4b44b8b84e4fefc4de0f819f10be6d0d02690a0015d" gracePeriod=2 Jan 31 08:51:38 crc kubenswrapper[4908]: I0131 08:51:38.878471 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94b8j" podStartSLOduration=6.576910271 podStartE2EDuration="9.878451376s" podCreationTimestamp="2026-01-31 08:51:29 +0000 UTC" firstStartedPulling="2026-01-31 08:51:32.79481982 +0000 UTC m=+5399.410764474" lastFinishedPulling="2026-01-31 08:51:36.096360925 +0000 UTC m=+5402.712305579" observedRunningTime="2026-01-31 08:51:38.875325049 +0000 UTC m=+5405.491269703" watchObservedRunningTime="2026-01-31 08:51:38.878451376 +0000 UTC m=+5405.494396050" Jan 31 08:51:39 crc kubenswrapper[4908]: I0131 08:51:39.988012 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:39 crc kubenswrapper[4908]: I0131 08:51:39.988660 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:41 crc kubenswrapper[4908]: I0131 08:51:41.034375 4908 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94b8j" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="registry-server" probeResult="failure" output=< Jan 31 08:51:41 crc kubenswrapper[4908]: timeout: failed to connect service ":50051" within 1s Jan 31 08:51:41 crc kubenswrapper[4908]: > Jan 31 08:51:41 crc kubenswrapper[4908]: I0131 
08:51:41.883801 4908 generic.go:334] "Generic (PLEG): container finished" podID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerID="c1878db9818cb7417801e4b44b8b84e4fefc4de0f819f10be6d0d02690a0015d" exitCode=0 Jan 31 08:51:41 crc kubenswrapper[4908]: I0131 08:51:41.884326 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerDied","Data":"c1878db9818cb7417801e4b44b8b84e4fefc4de0f819f10be6d0d02690a0015d"} Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.625954 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.773311 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content\") pod \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.773540 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities\") pod \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.773651 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rknkl\" (UniqueName: \"kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl\") pod \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\" (UID: \"c978d0d1-b3b9-49c9-9e4f-f76443928ca5\") " Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.774328 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities" (OuterVolumeSpecName: "utilities") pod "c978d0d1-b3b9-49c9-9e4f-f76443928ca5" (UID: "c978d0d1-b3b9-49c9-9e4f-f76443928ca5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.787233 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl" (OuterVolumeSpecName: "kube-api-access-rknkl") pod "c978d0d1-b3b9-49c9-9e4f-f76443928ca5" (UID: "c978d0d1-b3b9-49c9-9e4f-f76443928ca5"). InnerVolumeSpecName "kube-api-access-rknkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.798343 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c978d0d1-b3b9-49c9-9e4f-f76443928ca5" (UID: "c978d0d1-b3b9-49c9-9e4f-f76443928ca5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.876099 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.876404 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rknkl\" (UniqueName: \"kubernetes.io/projected/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-kube-api-access-rknkl\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.876413 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c978d0d1-b3b9-49c9-9e4f-f76443928ca5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.894780 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7p82z" event={"ID":"c978d0d1-b3b9-49c9-9e4f-f76443928ca5","Type":"ContainerDied","Data":"b166575f7daf0b431dfc9ab4a82097bda49886bc5170d7fe22a57eae35d70325"} Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.894832 4908 scope.go:117] "RemoveContainer" containerID="c1878db9818cb7417801e4b44b8b84e4fefc4de0f819f10be6d0d02690a0015d" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.894847 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7p82z" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.927898 4908 scope.go:117] "RemoveContainer" containerID="be0683b9257398f7d217ef35c512836089afb43167f1019426d19d891bf7e48b" Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.934118 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.947405 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7p82z"] Jan 31 08:51:42 crc kubenswrapper[4908]: I0131 08:51:42.951034 4908 scope.go:117] "RemoveContainer" containerID="69250d955a729077bdd3a076768c37df08416a32f363e4f4dedba59a1fbd79b2" Jan 31 08:51:43 crc kubenswrapper[4908]: I0131 08:51:43.950243 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" path="/var/lib/kubelet/pods/c978d0d1-b3b9-49c9-9e4f-f76443928ca5/volumes" Jan 31 08:51:44 crc kubenswrapper[4908]: I0131 08:51:44.940047 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:51:44 crc kubenswrapper[4908]: E0131 08:51:44.940663 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:51:50 crc kubenswrapper[4908]: I0131 08:51:50.037417 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:50 crc kubenswrapper[4908]: I0131 08:51:50.098635 4908 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:50 crc kubenswrapper[4908]: I0131 08:51:50.278426 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:51 crc kubenswrapper[4908]: I0131 08:51:51.083212 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bfcpr_80a11742-1646-4db4-9a09-cf364dc4d60c/cert-manager-controller/0.log" Jan 31 08:51:51 crc kubenswrapper[4908]: I0131 08:51:51.283157 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mbqfv_2ec70fc1-2433-4ad9-9c14-f9decb3ae354/cert-manager-cainjector/0.log" Jan 31 08:51:51 crc kubenswrapper[4908]: I0131 08:51:51.353536 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8dx2s_2af952c0-4d39-4459-a32c-a963c7a6741b/cert-manager-webhook/0.log" Jan 31 08:51:51 crc kubenswrapper[4908]: I0131 08:51:51.966466 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94b8j" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="registry-server" containerID="cri-o://d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c" gracePeriod=2 Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.476859 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.561959 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content\") pod \"03acdc42-57b9-41bf-a1f3-d33213850a35\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.562073 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities\") pod \"03acdc42-57b9-41bf-a1f3-d33213850a35\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.562165 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv5w5\" (UniqueName: \"kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5\") pod \"03acdc42-57b9-41bf-a1f3-d33213850a35\" (UID: \"03acdc42-57b9-41bf-a1f3-d33213850a35\") " Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.563085 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities" (OuterVolumeSpecName: "utilities") pod "03acdc42-57b9-41bf-a1f3-d33213850a35" (UID: "03acdc42-57b9-41bf-a1f3-d33213850a35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.569305 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5" (OuterVolumeSpecName: "kube-api-access-sv5w5") pod "03acdc42-57b9-41bf-a1f3-d33213850a35" (UID: "03acdc42-57b9-41bf-a1f3-d33213850a35"). InnerVolumeSpecName "kube-api-access-sv5w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.676136 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.676191 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv5w5\" (UniqueName: \"kubernetes.io/projected/03acdc42-57b9-41bf-a1f3-d33213850a35-kube-api-access-sv5w5\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.717443 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03acdc42-57b9-41bf-a1f3-d33213850a35" (UID: "03acdc42-57b9-41bf-a1f3-d33213850a35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.778416 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03acdc42-57b9-41bf-a1f3-d33213850a35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.977663 4908 generic.go:334] "Generic (PLEG): container finished" podID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerID="d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c" exitCode=0 Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.977725 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94b8j" event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerDied","Data":"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c"} Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.977758 4908 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-94b8j" event={"ID":"03acdc42-57b9-41bf-a1f3-d33213850a35","Type":"ContainerDied","Data":"49aa934decfc294c67a3b26f31f0267767a2474ab6fa6f0ca6901b808ac92084"} Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.977823 4908 scope.go:117] "RemoveContainer" containerID="d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.978042 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94b8j" Jan 31 08:51:52 crc kubenswrapper[4908]: I0131 08:51:52.999348 4908 scope.go:117] "RemoveContainer" containerID="79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.022181 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.035495 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94b8j"] Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.051041 4908 scope.go:117] "RemoveContainer" containerID="86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.086251 4908 scope.go:117] "RemoveContainer" containerID="d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c" Jan 31 08:51:53 crc kubenswrapper[4908]: E0131 08:51:53.087402 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c\": container with ID starting with d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c not found: ID does not exist" containerID="d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.087449 4908 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c"} err="failed to get container status \"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c\": rpc error: code = NotFound desc = could not find container \"d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c\": container with ID starting with d6564e04e1b0d2fb42ec4778d0d4d1047ceab9295d61db7d9a6d76ba78e9a03c not found: ID does not exist" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.087474 4908 scope.go:117] "RemoveContainer" containerID="79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778" Jan 31 08:51:53 crc kubenswrapper[4908]: E0131 08:51:53.088757 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778\": container with ID starting with 79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778 not found: ID does not exist" containerID="79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.088802 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778"} err="failed to get container status \"79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778\": rpc error: code = NotFound desc = could not find container \"79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778\": container with ID starting with 79c1c2b2bf84bb4256c0ae479d4075588bbaf0cfdc492b406d94727c5e7ea778 not found: ID does not exist" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.088831 4908 scope.go:117] "RemoveContainer" containerID="86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b" Jan 31 08:51:53 crc kubenswrapper[4908]: E0131 
08:51:53.089104 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b\": container with ID starting with 86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b not found: ID does not exist" containerID="86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.089136 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b"} err="failed to get container status \"86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b\": rpc error: code = NotFound desc = could not find container \"86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b\": container with ID starting with 86e19ad1fcc36d96e8e7f0e34854b99843bfa46aac55ca2b0709fdd021c1b43b not found: ID does not exist" Jan 31 08:51:53 crc kubenswrapper[4908]: I0131 08:51:53.952487 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" path="/var/lib/kubelet/pods/03acdc42-57b9-41bf-a1f3-d33213850a35/volumes" Jan 31 08:51:57 crc kubenswrapper[4908]: I0131 08:51:57.947139 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:51:57 crc kubenswrapper[4908]: E0131 08:51:57.949397 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:52:04 crc kubenswrapper[4908]: I0131 08:52:04.527998 
4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2jnxn_3a0b9ab4-75be-4ba4-bd4a-f87df5b21366/nmstate-console-plugin/0.log" Jan 31 08:52:04 crc kubenswrapper[4908]: I0131 08:52:04.700525 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-r9rqh_d9f1a13a-3bbd-4ee8-a70f-e58da94c82d5/nmstate-handler/0.log" Jan 31 08:52:04 crc kubenswrapper[4908]: I0131 08:52:04.804468 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqkw8_6513c39c-085e-4d01-bf22-be7f55191bd5/kube-rbac-proxy/0.log" Jan 31 08:52:04 crc kubenswrapper[4908]: I0131 08:52:04.863099 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqkw8_6513c39c-085e-4d01-bf22-be7f55191bd5/nmstate-metrics/0.log" Jan 31 08:52:04 crc kubenswrapper[4908]: I0131 08:52:04.972915 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-psr6s_e4126da2-44f9-417d-88ca-4f56a8d7e30e/nmstate-operator/0.log" Jan 31 08:52:05 crc kubenswrapper[4908]: I0131 08:52:05.098745 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hrzpt_347aecb4-8ba2-4837-af2b-11582ba4de6f/nmstate-webhook/0.log" Jan 31 08:52:09 crc kubenswrapper[4908]: I0131 08:52:09.940877 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:52:09 crc kubenswrapper[4908]: E0131 08:52:09.941653 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:52:22 crc kubenswrapper[4908]: I0131 08:52:22.941051 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:52:22 crc kubenswrapper[4908]: E0131 08:52:22.941705 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.281597 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mkc4z_6e02f235-b699-4c99-a66e-7bde91d7b5be/kube-rbac-proxy/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.457304 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mkc4z_6e02f235-b699-4c99-a66e-7bde91d7b5be/controller/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.518592 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-frr-files/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.714454 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-reloader/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.763389 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-frr-files/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.769264 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-metrics/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.820144 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-reloader/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.923757 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-frr-files/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.979086 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-reloader/0.log" Jan 31 08:52:33 crc kubenswrapper[4908]: I0131 08:52:33.986843 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-metrics/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.040341 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-metrics/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.252647 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-metrics/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.252682 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-frr-files/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.263296 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/cp-reloader/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.273556 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/controller/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.442405 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/kube-rbac-proxy/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.478133 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/frr-metrics/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.509125 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/kube-rbac-proxy-frr/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.660518 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/reloader/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.765940 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2xcng_d12d1a65-d2bd-47b1-a662-d97bbfa8aa51/frr-k8s-webhook-server/0.log" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.940372 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:52:34 crc kubenswrapper[4908]: E0131 08:52:34.940641 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:52:34 crc kubenswrapper[4908]: I0131 08:52:34.954378 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-655f5d8bc7-frrz6_776290b3-3d7d-4abb-8718-0e6dadf1bbfa/manager/0.log" Jan 31 08:52:35 crc kubenswrapper[4908]: I0131 08:52:35.130841 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5dc4575dbd-vcbx4_0bf3501e-40f0-4fd9-aa69-e1843e83887e/webhook-server/0.log" Jan 31 08:52:35 crc kubenswrapper[4908]: I0131 08:52:35.152562 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wfcsd_92a4638a-389b-465b-8c59-c8689328205b/kube-rbac-proxy/0.log" Jan 31 08:52:36 crc kubenswrapper[4908]: I0131 08:52:36.076586 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wfcsd_92a4638a-389b-465b-8c59-c8689328205b/speaker/0.log" Jan 31 08:52:36 crc kubenswrapper[4908]: I0131 08:52:36.373474 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rp98d_6c6977f3-afad-417f-b8e0-8283a6456b1b/frr/0.log" Jan 31 08:52:48 crc kubenswrapper[4908]: I0131 08:52:48.908666 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/util/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.128086 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/util/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.151346 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/pull/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.168001 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/pull/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.549673 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/util/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.569601 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/pull/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.572131 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7kqh_625a99c3-e2e7-493b-8a5a-071981756003/extract/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.767375 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/util/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.876923 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/util/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.914111 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/pull/0.log" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.917211 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/pull/0.log" Jan 31 
08:52:49 crc kubenswrapper[4908]: E0131 08:52:49.940961 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:52:49 crc kubenswrapper[4908]: I0131 08:52:49.941359 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:52:49 crc kubenswrapper[4908]: E0131 08:52:49.941641 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.125084 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/util/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.129919 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/extract/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.149781 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ctsfj_1e355e32-5cfc-404d-9934-3d15a8189545/pull/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.284788 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-utilities/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.508337 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-utilities/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.547172 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-content/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.607905 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-content/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.711759 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-content/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.758832 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/extract-utilities/0.log" Jan 31 08:52:50 crc kubenswrapper[4908]: I0131 08:52:50.945997 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-utilities/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.199455 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-utilities/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.211019 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-content/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.238613 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-content/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.429114 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-content/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.457744 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/extract-utilities/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.671671 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wfl8d_148ec21b-11ac-46af-b840-f814f86ff031/marketplace-operator/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.941721 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m4dmt_5cabe96a-25f1-4049-a1f6-048e6fa2da67/registry-server/0.log" Jan 31 08:52:51 crc kubenswrapper[4908]: I0131 08:52:51.966534 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-utilities/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.230185 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-utilities/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.253866 4908 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-content/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.280917 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-content/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.374992 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnqj6_1867a8a2-ed70-4a9f-a1a2-7329b27688a8/registry-server/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.484991 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-utilities/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.505797 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/extract-content/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.681919 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqqs8_356c3f0f-a93e-470a-850d-0da5329bc06c/registry-server/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.779838 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-utilities/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.898223 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-utilities/0.log" Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.948178 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-content/0.log" 
Jan 31 08:52:52 crc kubenswrapper[4908]: I0131 08:52:52.959063 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-content/0.log" Jan 31 08:52:53 crc kubenswrapper[4908]: I0131 08:52:53.130280 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-utilities/0.log" Jan 31 08:52:53 crc kubenswrapper[4908]: I0131 08:52:53.149875 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/extract-content/0.log" Jan 31 08:52:54 crc kubenswrapper[4908]: I0131 08:52:54.077972 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qf824_ef2517bf-41da-4135-98bd-71b7483a6cf8/registry-server/0.log" Jan 31 08:53:01 crc kubenswrapper[4908]: I0131 08:53:01.940595 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:53:01 crc kubenswrapper[4908]: E0131 08:53:01.941339 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 08:53:12 crc kubenswrapper[4908]: I0131 08:53:12.940183 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:53:13 crc kubenswrapper[4908]: I0131 08:53:13.679411 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c"} Jan 31 08:53:50 crc kubenswrapper[4908]: E0131 08:53:50.941439 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:54:46 crc kubenswrapper[4908]: I0131 08:54:46.492827 4908 generic.go:334] "Generic (PLEG): container finished" podID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerID="48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760" exitCode=0 Jan 31 08:54:46 crc kubenswrapper[4908]: I0131 08:54:46.492926 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" event={"ID":"759b96f0-114d-41c7-a885-7e9fafc2662b","Type":"ContainerDied","Data":"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760"} Jan 31 08:54:46 crc kubenswrapper[4908]: I0131 08:54:46.493950 4908 scope.go:117] "RemoveContainer" containerID="48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760" Jan 31 08:54:46 crc kubenswrapper[4908]: I0131 08:54:46.924281 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9sx6m_must-gather-xlc9p_759b96f0-114d-41c7-a885-7e9fafc2662b/gather/0.log" Jan 31 08:54:55 crc kubenswrapper[4908]: I0131 08:54:55.779165 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9sx6m/must-gather-xlc9p"] Jan 31 08:54:55 crc kubenswrapper[4908]: I0131 08:54:55.780006 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="copy" containerID="cri-o://cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f" gracePeriod=2 Jan 31 08:54:55 crc 
kubenswrapper[4908]: I0131 08:54:55.788946 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9sx6m/must-gather-xlc9p"] Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.280449 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9sx6m_must-gather-xlc9p_759b96f0-114d-41c7-a885-7e9fafc2662b/copy/0.log" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.281078 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.414200 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output\") pod \"759b96f0-114d-41c7-a885-7e9fafc2662b\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.414427 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-446b5\" (UniqueName: \"kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5\") pod \"759b96f0-114d-41c7-a885-7e9fafc2662b\" (UID: \"759b96f0-114d-41c7-a885-7e9fafc2662b\") " Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.422095 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5" (OuterVolumeSpecName: "kube-api-access-446b5") pod "759b96f0-114d-41c7-a885-7e9fafc2662b" (UID: "759b96f0-114d-41c7-a885-7e9fafc2662b"). InnerVolumeSpecName "kube-api-access-446b5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.517385 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-446b5\" (UniqueName: \"kubernetes.io/projected/759b96f0-114d-41c7-a885-7e9fafc2662b-kube-api-access-446b5\") on node \"crc\" DevicePath \"\"" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.570786 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "759b96f0-114d-41c7-a885-7e9fafc2662b" (UID: "759b96f0-114d-41c7-a885-7e9fafc2662b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.598660 4908 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9sx6m_must-gather-xlc9p_759b96f0-114d-41c7-a885-7e9fafc2662b/copy/0.log" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.599052 4908 generic.go:334] "Generic (PLEG): container finished" podID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerID="cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f" exitCode=143 Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.599089 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9sx6m/must-gather-xlc9p" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.599118 4908 scope.go:117] "RemoveContainer" containerID="cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.616923 4908 scope.go:117] "RemoveContainer" containerID="48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.618767 4908 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/759b96f0-114d-41c7-a885-7e9fafc2662b-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.655333 4908 scope.go:117] "RemoveContainer" containerID="cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f" Jan 31 08:54:56 crc kubenswrapper[4908]: E0131 08:54:56.655699 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f\": container with ID starting with cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f not found: ID does not exist" containerID="cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.655740 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f"} err="failed to get container status \"cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f\": rpc error: code = NotFound desc = could not find container \"cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f\": container with ID starting with cf5d1f95dbbdcf9914fe8e8180f0a02344f5d3ac960f212f5857cf391041380f not found: ID does not exist" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 
08:54:56.655767 4908 scope.go:117] "RemoveContainer" containerID="48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760" Jan 31 08:54:56 crc kubenswrapper[4908]: E0131 08:54:56.656022 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760\": container with ID starting with 48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760 not found: ID does not exist" containerID="48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760" Jan 31 08:54:56 crc kubenswrapper[4908]: I0131 08:54:56.656057 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760"} err="failed to get container status \"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760\": rpc error: code = NotFound desc = could not find container \"48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760\": container with ID starting with 48322836d952ab70cbd0970315f066e81eebfd5906d2b1ab13630ea5dcf00760 not found: ID does not exist" Jan 31 08:54:57 crc kubenswrapper[4908]: I0131 08:54:57.950303 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" path="/var/lib/kubelet/pods/759b96f0-114d-41c7-a885-7e9fafc2662b/volumes" Jan 31 08:55:04 crc kubenswrapper[4908]: E0131 08:55:04.941528 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:55:32 crc kubenswrapper[4908]: I0131 08:55:32.668147 4908 scope.go:117] "RemoveContainer" containerID="9581d4d01cab2ae4efcd484c57ba1d60237652151f0ed76465a1befa6eb8341c" Jan 31 08:55:40 crc kubenswrapper[4908]: I0131 08:55:40.431730 4908 
patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:55:40 crc kubenswrapper[4908]: I0131 08:55:40.432311 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:56:10 crc kubenswrapper[4908]: I0131 08:56:10.430634 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:56:10 crc kubenswrapper[4908]: I0131 08:56:10.431242 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:56:30 crc kubenswrapper[4908]: E0131 08:56:30.940314 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:56:40 crc kubenswrapper[4908]: I0131 08:56:40.430936 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:56:40 crc kubenswrapper[4908]: I0131 08:56:40.431700 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:56:40 crc kubenswrapper[4908]: I0131 08:56:40.431754 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 08:56:40 crc kubenswrapper[4908]: I0131 08:56:40.432538 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:56:40 crc kubenswrapper[4908]: I0131 08:56:40.432601 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c" gracePeriod=600 Jan 31 08:56:41 crc kubenswrapper[4908]: I0131 08:56:41.477515 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c" exitCode=0 Jan 31 08:56:41 crc kubenswrapper[4908]: I0131 08:56:41.477595 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" 
event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c"} Jan 31 08:56:41 crc kubenswrapper[4908]: I0131 08:56:41.477827 4908 scope.go:117] "RemoveContainer" containerID="c31afd0aae9fdd20c6d396f2a0969c95b69e611fce07660e6aa9842abd69892e" Jan 31 08:56:42 crc kubenswrapper[4908]: I0131 08:56:42.489297 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerStarted","Data":"bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060"} Jan 31 08:57:32 crc kubenswrapper[4908]: E0131 08:57:32.941223 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.427525 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428519 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="extract-content" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428539 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="extract-content" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428560 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="copy" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428569 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="copy" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428585 4908 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428594 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428611 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="gather" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428618 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="gather" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428638 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="extract-utilities" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428645 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="extract-utilities" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428664 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="extract-utilities" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428671 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="extract-utilities" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428683 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428690 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: E0131 08:57:49.428698 4908 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="extract-content" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428705 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="extract-content" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428956 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="copy" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.428999 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="c978d0d1-b3b9-49c9-9e4f-f76443928ca5" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.429011 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="03acdc42-57b9-41bf-a1f3-d33213850a35" containerName="registry-server" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.429032 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="759b96f0-114d-41c7-a885-7e9fafc2662b" containerName="gather" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.430749 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.442422 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.549884 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.550433 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.550511 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvn9\" (UniqueName: \"kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.653641 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.653733 4908 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wtvn9\" (UniqueName: \"kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.653860 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.654610 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.654701 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.678126 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvn9\" (UniqueName: \"kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9\") pod \"certified-operators-5p486\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:49 crc kubenswrapper[4908]: I0131 08:57:49.763510 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:50 crc kubenswrapper[4908]: I0131 08:57:50.306376 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:57:51 crc kubenswrapper[4908]: I0131 08:57:51.110613 4908 generic.go:334] "Generic (PLEG): container finished" podID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerID="0d2e8272a75d8095dc30a94790368d66fe523cac1854477346d68fdd7aad8a75" exitCode=0 Jan 31 08:57:51 crc kubenswrapper[4908]: I0131 08:57:51.110680 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerDied","Data":"0d2e8272a75d8095dc30a94790368d66fe523cac1854477346d68fdd7aad8a75"} Jan 31 08:57:51 crc kubenswrapper[4908]: I0131 08:57:51.110897 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerStarted","Data":"d8f791da34056627dfdf9e7f7d68677da9d65f3c34e8dcc9359fe2292dbb12c3"} Jan 31 08:57:51 crc kubenswrapper[4908]: I0131 08:57:51.112614 4908 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:57:53 crc kubenswrapper[4908]: I0131 08:57:53.136390 4908 generic.go:334] "Generic (PLEG): container finished" podID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerID="3bb53145d7ca730a7db21cc2954ffcb962eae884b8013ea775ed5bd3055675de" exitCode=0 Jan 31 08:57:53 crc kubenswrapper[4908]: I0131 08:57:53.136459 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerDied","Data":"3bb53145d7ca730a7db21cc2954ffcb962eae884b8013ea775ed5bd3055675de"} Jan 31 08:57:55 crc kubenswrapper[4908]: I0131 08:57:55.163616 4908 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerStarted","Data":"747519f26ca51192dc73ff4181574ca3d4adf4fa53964473a83b05b80b0a4e96"} Jan 31 08:57:55 crc kubenswrapper[4908]: I0131 08:57:55.191121 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p486" podStartSLOduration=2.870450816 podStartE2EDuration="6.191099363s" podCreationTimestamp="2026-01-31 08:57:49 +0000 UTC" firstStartedPulling="2026-01-31 08:57:51.112362812 +0000 UTC m=+5777.728307466" lastFinishedPulling="2026-01-31 08:57:54.433011359 +0000 UTC m=+5781.048956013" observedRunningTime="2026-01-31 08:57:55.182326062 +0000 UTC m=+5781.798270716" watchObservedRunningTime="2026-01-31 08:57:55.191099363 +0000 UTC m=+5781.807044027" Jan 31 08:57:59 crc kubenswrapper[4908]: I0131 08:57:59.765491 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:59 crc kubenswrapper[4908]: I0131 08:57:59.766126 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:57:59 crc kubenswrapper[4908]: I0131 08:57:59.807962 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:58:00 crc kubenswrapper[4908]: I0131 08:58:00.257346 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:58:00 crc kubenswrapper[4908]: I0131 08:58:00.304345 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:58:02 crc kubenswrapper[4908]: I0131 08:58:02.229084 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p486" 
podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="registry-server" containerID="cri-o://747519f26ca51192dc73ff4181574ca3d4adf4fa53964473a83b05b80b0a4e96" gracePeriod=2 Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.244923 4908 generic.go:334] "Generic (PLEG): container finished" podID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerID="747519f26ca51192dc73ff4181574ca3d4adf4fa53964473a83b05b80b0a4e96" exitCode=0 Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.245012 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerDied","Data":"747519f26ca51192dc73ff4181574ca3d4adf4fa53964473a83b05b80b0a4e96"} Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.245053 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p486" event={"ID":"1c98b6cb-4df7-470d-95bc-53eb250d3586","Type":"ContainerDied","Data":"d8f791da34056627dfdf9e7f7d68677da9d65f3c34e8dcc9359fe2292dbb12c3"} Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.245074 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f791da34056627dfdf9e7f7d68677da9d65f3c34e8dcc9359fe2292dbb12c3" Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.265344 4908 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.439838 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvn9\" (UniqueName: \"kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9\") pod \"1c98b6cb-4df7-470d-95bc-53eb250d3586\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.439928 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities\") pod \"1c98b6cb-4df7-470d-95bc-53eb250d3586\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.440039 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content\") pod \"1c98b6cb-4df7-470d-95bc-53eb250d3586\" (UID: \"1c98b6cb-4df7-470d-95bc-53eb250d3586\") " Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.441001 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities" (OuterVolumeSpecName: "utilities") pod "1c98b6cb-4df7-470d-95bc-53eb250d3586" (UID: "1c98b6cb-4df7-470d-95bc-53eb250d3586"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.445970 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9" (OuterVolumeSpecName: "kube-api-access-wtvn9") pod "1c98b6cb-4df7-470d-95bc-53eb250d3586" (UID: "1c98b6cb-4df7-470d-95bc-53eb250d3586"). InnerVolumeSpecName "kube-api-access-wtvn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.542889 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvn9\" (UniqueName: \"kubernetes.io/projected/1c98b6cb-4df7-470d-95bc-53eb250d3586-kube-api-access-wtvn9\") on node \"crc\" DevicePath \"\"" Jan 31 08:58:03 crc kubenswrapper[4908]: I0131 08:58:03.542925 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:58:04 crc kubenswrapper[4908]: I0131 08:58:04.254154 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p486" Jan 31 08:58:04 crc kubenswrapper[4908]: I0131 08:58:04.265887 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c98b6cb-4df7-470d-95bc-53eb250d3586" (UID: "1c98b6cb-4df7-470d-95bc-53eb250d3586"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:58:04 crc kubenswrapper[4908]: I0131 08:58:04.358027 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c98b6cb-4df7-470d-95bc-53eb250d3586-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:58:04 crc kubenswrapper[4908]: I0131 08:58:04.587036 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:58:04 crc kubenswrapper[4908]: I0131 08:58:04.596120 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p486"] Jan 31 08:58:05 crc kubenswrapper[4908]: I0131 08:58:05.951996 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" path="/var/lib/kubelet/pods/1c98b6cb-4df7-470d-95bc-53eb250d3586/volumes" Jan 31 08:58:44 crc kubenswrapper[4908]: E0131 08:58:44.940706 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:59:10 crc kubenswrapper[4908]: I0131 08:59:10.431427 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:59:10 crc kubenswrapper[4908]: I0131 08:59:10.432268 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:59:40 
crc kubenswrapper[4908]: I0131 08:59:40.431840 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:59:40 crc kubenswrapper[4908]: I0131 08:59:40.432684 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:59:55 crc kubenswrapper[4908]: E0131 08:59:55.942212 4908 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.873729 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm4vz"] Jan 31 08:59:58 crc kubenswrapper[4908]: E0131 08:59:58.874400 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="extract-content" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.874413 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="extract-content" Jan 31 08:59:58 crc kubenswrapper[4908]: E0131 08:59:58.874428 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="extract-utilities" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.874433 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" 
containerName="extract-utilities" Jan 31 08:59:58 crc kubenswrapper[4908]: E0131 08:59:58.874448 4908 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="registry-server" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.874454 4908 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="registry-server" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.874848 4908 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c98b6cb-4df7-470d-95bc-53eb250d3586" containerName="registry-server" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.876434 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.894480 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4vz"] Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.968585 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.968672 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:58 crc kubenswrapper[4908]: I0131 08:59:58.968767 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nmbxt\" (UniqueName: \"kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.071013 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.071380 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.071606 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbxt\" (UniqueName: \"kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.071746 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.071746 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.092965 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbxt\" (UniqueName: \"kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt\") pod \"community-operators-hm4vz\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.201219 4908 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 08:59:59 crc kubenswrapper[4908]: I0131 08:59:59.653339 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4vz"] Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.193579 4908 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws"] Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.195356 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.197476 4908 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.198679 4908 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.204937 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws"] Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.233411 4908 generic.go:334] "Generic (PLEG): container finished" podID="47a6d207-14d1-4748-85b3-ff5753416079" containerID="9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130" exitCode=0 Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.233456 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerDied","Data":"9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130"} Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.233732 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerStarted","Data":"26bdfd91bb312766d5aac94ed1aad724e4b71f0efa842cf79054af67b3ade581"} Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.320595 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.320651 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62d2\" (UniqueName: \"kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.321565 4908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.423909 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.424313 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.424345 4908 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r62d2\" (UniqueName: 
\"kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.425249 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.431526 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.440521 4908 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r62d2\" (UniqueName: \"kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2\") pod \"collect-profiles-29497500-99gws\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.522572 4908 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:00 crc kubenswrapper[4908]: I0131 09:00:00.998473 4908 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws"] Jan 31 09:00:00 crc kubenswrapper[4908]: W0131 09:00:00.999202 4908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81e5f662_b5f4_4439_8225_a6aa496407aa.slice/crio-a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a WatchSource:0}: Error finding container a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a: Status 404 returned error can't find the container with id a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a Jan 31 09:00:01 crc kubenswrapper[4908]: I0131 09:00:01.244262 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" event={"ID":"81e5f662-b5f4-4439-8225-a6aa496407aa","Type":"ContainerStarted","Data":"a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a"} Jan 31 09:00:02 crc kubenswrapper[4908]: I0131 09:00:02.254938 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerStarted","Data":"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"} Jan 31 09:00:02 crc kubenswrapper[4908]: I0131 09:00:02.256936 4908 generic.go:334] "Generic (PLEG): container finished" podID="81e5f662-b5f4-4439-8225-a6aa496407aa" containerID="f52ce4f3323d77d7a28a6c5baa91298102606c16ca542d7661fa6d5de9bf771e" exitCode=0 Jan 31 09:00:02 crc kubenswrapper[4908]: I0131 09:00:02.257038 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" 
event={"ID":"81e5f662-b5f4-4439-8225-a6aa496407aa","Type":"ContainerDied","Data":"f52ce4f3323d77d7a28a6c5baa91298102606c16ca542d7661fa6d5de9bf771e"} Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.711212 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.792807 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume\") pod \"81e5f662-b5f4-4439-8225-a6aa496407aa\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.792904 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume\") pod \"81e5f662-b5f4-4439-8225-a6aa496407aa\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.792965 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r62d2\" (UniqueName: \"kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2\") pod \"81e5f662-b5f4-4439-8225-a6aa496407aa\" (UID: \"81e5f662-b5f4-4439-8225-a6aa496407aa\") " Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.793562 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "81e5f662-b5f4-4439-8225-a6aa496407aa" (UID: "81e5f662-b5f4-4439-8225-a6aa496407aa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.798641 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2" (OuterVolumeSpecName: "kube-api-access-r62d2") pod "81e5f662-b5f4-4439-8225-a6aa496407aa" (UID: "81e5f662-b5f4-4439-8225-a6aa496407aa"). InnerVolumeSpecName "kube-api-access-r62d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.798658 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81e5f662-b5f4-4439-8225-a6aa496407aa" (UID: "81e5f662-b5f4-4439-8225-a6aa496407aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.895406 4908 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e5f662-b5f4-4439-8225-a6aa496407aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.895444 4908 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e5f662-b5f4-4439-8225-a6aa496407aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:03 crc kubenswrapper[4908]: I0131 09:00:03.895455 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r62d2\" (UniqueName: \"kubernetes.io/projected/81e5f662-b5f4-4439-8225-a6aa496407aa-kube-api-access-r62d2\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:04 crc kubenswrapper[4908]: I0131 09:00:04.272175 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" 
event={"ID":"81e5f662-b5f4-4439-8225-a6aa496407aa","Type":"ContainerDied","Data":"a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a"} Jan 31 09:00:04 crc kubenswrapper[4908]: I0131 09:00:04.272215 4908 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01a9ab0efd19fd733d93bf0ef9c77a66307562137e44406d2703032c9dd2d4a" Jan 31 09:00:04 crc kubenswrapper[4908]: I0131 09:00:04.272225 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-99gws" Jan 31 09:00:04 crc kubenswrapper[4908]: I0131 09:00:04.811814 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w"] Jan 31 09:00:04 crc kubenswrapper[4908]: I0131 09:00:04.820625 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-68c8w"] Jan 31 09:00:05 crc kubenswrapper[4908]: I0131 09:00:05.949636 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3873dd5-f913-48ef-a2b4-f3fc5e442b61" path="/var/lib/kubelet/pods/a3873dd5-f913-48ef-a2b4-f3fc5e442b61/volumes" Jan 31 09:00:07 crc kubenswrapper[4908]: I0131 09:00:07.300172 4908 generic.go:334] "Generic (PLEG): container finished" podID="47a6d207-14d1-4748-85b3-ff5753416079" containerID="acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7" exitCode=0 Jan 31 09:00:07 crc kubenswrapper[4908]: I0131 09:00:07.300257 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerDied","Data":"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"} Jan 31 09:00:09 crc kubenswrapper[4908]: I0131 09:00:09.319346 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" 
event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerStarted","Data":"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"} Jan 31 09:00:09 crc kubenswrapper[4908]: I0131 09:00:09.351058 4908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm4vz" podStartSLOduration=3.066698687 podStartE2EDuration="11.351042246s" podCreationTimestamp="2026-01-31 08:59:58 +0000 UTC" firstStartedPulling="2026-01-31 09:00:00.235071101 +0000 UTC m=+5906.851015765" lastFinishedPulling="2026-01-31 09:00:08.51941467 +0000 UTC m=+5915.135359324" observedRunningTime="2026-01-31 09:00:09.344380009 +0000 UTC m=+5915.960324653" watchObservedRunningTime="2026-01-31 09:00:09.351042246 +0000 UTC m=+5915.966986900" Jan 31 09:00:10 crc kubenswrapper[4908]: I0131 09:00:10.431068 4908 patch_prober.go:28] interesting pod/machine-config-daemon-j7vgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:00:10 crc kubenswrapper[4908]: I0131 09:00:10.431147 4908 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:00:10 crc kubenswrapper[4908]: I0131 09:00:10.431205 4908 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" Jan 31 09:00:10 crc kubenswrapper[4908]: I0131 09:00:10.432018 4908 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060"} pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:00:10 crc kubenswrapper[4908]: I0131 09:00:10.432078 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerName="machine-config-daemon" containerID="cri-o://bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060" gracePeriod=600 Jan 31 09:00:10 crc kubenswrapper[4908]: E0131 09:00:10.550920 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 09:00:11 crc kubenswrapper[4908]: I0131 09:00:11.363514 4908 generic.go:334] "Generic (PLEG): container finished" podID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" containerID="bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060" exitCode=0 Jan 31 09:00:11 crc kubenswrapper[4908]: I0131 09:00:11.363790 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" event={"ID":"a4e21704-e401-411f-99c0-4b4afe2bcf9f","Type":"ContainerDied","Data":"bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060"} Jan 31 09:00:11 crc kubenswrapper[4908]: I0131 09:00:11.365324 4908 scope.go:117] "RemoveContainer" containerID="378b82a73addaa09daeb19b6e2398f3408bd923d472cee816d40bb3da79c238c" Jan 31 09:00:11 crc kubenswrapper[4908]: I0131 09:00:11.367371 4908 
scope.go:117] "RemoveContainer" containerID="bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060" Jan 31 09:00:11 crc kubenswrapper[4908]: E0131 09:00:11.367744 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f" Jan 31 09:00:19 crc kubenswrapper[4908]: I0131 09:00:19.202484 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 09:00:19 crc kubenswrapper[4908]: I0131 09:00:19.203140 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 09:00:19 crc kubenswrapper[4908]: I0131 09:00:19.245701 4908 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 09:00:19 crc kubenswrapper[4908]: I0131 09:00:19.480756 4908 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 09:00:19 crc kubenswrapper[4908]: I0131 09:00:19.534734 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm4vz"] Jan 31 09:00:21 crc kubenswrapper[4908]: I0131 09:00:21.445997 4908 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hm4vz" podUID="47a6d207-14d1-4748-85b3-ff5753416079" containerName="registry-server" containerID="cri-o://af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848" gracePeriod=2 Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.212369 4908 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4vz" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.345356 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content\") pod \"47a6d207-14d1-4748-85b3-ff5753416079\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.345450 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities\") pod \"47a6d207-14d1-4748-85b3-ff5753416079\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.345604 4908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmbxt\" (UniqueName: \"kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt\") pod \"47a6d207-14d1-4748-85b3-ff5753416079\" (UID: \"47a6d207-14d1-4748-85b3-ff5753416079\") " Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.346815 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities" (OuterVolumeSpecName: "utilities") pod "47a6d207-14d1-4748-85b3-ff5753416079" (UID: "47a6d207-14d1-4748-85b3-ff5753416079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.356483 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt" (OuterVolumeSpecName: "kube-api-access-nmbxt") pod "47a6d207-14d1-4748-85b3-ff5753416079" (UID: "47a6d207-14d1-4748-85b3-ff5753416079"). 
InnerVolumeSpecName "kube-api-access-nmbxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.400199 4908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47a6d207-14d1-4748-85b3-ff5753416079" (UID: "47a6d207-14d1-4748-85b3-ff5753416079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.447685 4908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmbxt\" (UniqueName: \"kubernetes.io/projected/47a6d207-14d1-4748-85b3-ff5753416079-kube-api-access-nmbxt\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.447723 4908 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.447733 4908 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a6d207-14d1-4748-85b3-ff5753416079-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.458208 4908 generic.go:334] "Generic (PLEG): container finished" podID="47a6d207-14d1-4748-85b3-ff5753416079" containerID="af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848" exitCode=0 Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.458248 4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerDied","Data":"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"} Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.458283 
4908 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4vz" event={"ID":"47a6d207-14d1-4748-85b3-ff5753416079","Type":"ContainerDied","Data":"26bdfd91bb312766d5aac94ed1aad724e4b71f0efa842cf79054af67b3ade581"}
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.458312 4908 scope.go:117] "RemoveContainer" containerID="af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.458355 4908 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4vz"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.493153 4908 scope.go:117] "RemoveContainer" containerID="acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.506901 4908 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm4vz"]
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.515023 4908 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hm4vz"]
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.530265 4908 scope.go:117] "RemoveContainer" containerID="9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.561493 4908 scope.go:117] "RemoveContainer" containerID="af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"
Jan 31 09:00:22 crc kubenswrapper[4908]: E0131 09:00:22.561958 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848\": container with ID starting with af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848 not found: ID does not exist" containerID="af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.562035 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848"} err="failed to get container status \"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848\": rpc error: code = NotFound desc = could not find container \"af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848\": container with ID starting with af5da2598e8f50ce3d0695b4793cc764ad472b4684413ff928482ed12a5af848 not found: ID does not exist"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.562072 4908 scope.go:117] "RemoveContainer" containerID="acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"
Jan 31 09:00:22 crc kubenswrapper[4908]: E0131 09:00:22.562709 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7\": container with ID starting with acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7 not found: ID does not exist" containerID="acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.562744 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7"} err="failed to get container status \"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7\": rpc error: code = NotFound desc = could not find container \"acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7\": container with ID starting with acff0482be9b33b77e35acd3657318cc0418b1ce6e13794411c6b270c37510f7 not found: ID does not exist"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.562765 4908 scope.go:117] "RemoveContainer" containerID="9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130"
Jan 31 09:00:22 crc kubenswrapper[4908]: E0131 09:00:22.563121 4908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130\": container with ID starting with 9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130 not found: ID does not exist" containerID="9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130"
Jan 31 09:00:22 crc kubenswrapper[4908]: I0131 09:00:22.563143 4908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130"} err="failed to get container status \"9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130\": rpc error: code = NotFound desc = could not find container \"9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130\": container with ID starting with 9318030d272213da75b3b8d8a80e12388cd2b8e8c20f043bcb5384b60e3bd130 not found: ID does not exist"
Jan 31 09:00:23 crc kubenswrapper[4908]: I0131 09:00:23.949322 4908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a6d207-14d1-4748-85b3-ff5753416079" path="/var/lib/kubelet/pods/47a6d207-14d1-4748-85b3-ff5753416079/volumes"
Jan 31 09:00:24 crc kubenswrapper[4908]: I0131 09:00:24.939858 4908 scope.go:117] "RemoveContainer" containerID="bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060"
Jan 31 09:00:24 crc kubenswrapper[4908]: E0131 09:00:24.940711 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"
Jan 31 09:00:32 crc kubenswrapper[4908]: I0131 09:00:32.824577 4908 scope.go:117] "RemoveContainer" containerID="ad9c3e751330496ece65eea7c818ec03ec6e90ed16e7993ca3e2eec49ab5dc53"
Jan 31 09:00:36 crc kubenswrapper[4908]: I0131 09:00:36.940196 4908 scope.go:117] "RemoveContainer" containerID="bc6a0dd132ad45d61584cdfb77d93de0bca05105789afe3bd08822c2fa88b060"
Jan 31 09:00:36 crc kubenswrapper[4908]: E0131 09:00:36.940887 4908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7vgm_openshift-machine-config-operator(a4e21704-e401-411f-99c0-4b4afe2bcf9f)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7vgm" podUID="a4e21704-e401-411f-99c0-4b4afe2bcf9f"